Test Report: KVM_Linux_crio 17114

51f3d9893db86a392fa9064ae9bce74bae887273:2023-08-30:30790

Failed tests (27/288)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 162.3
36 TestAddons/StoppedEnableDisable 155.33
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 163.34
200 TestMultiNode/serial/PingHostFrom2Pods 3.13
206 TestMultiNode/serial/RestartKeepsNodes 688.22
208 TestMultiNode/serial/StopMultiNode 142.97
215 TestPreload 178.56
221 TestRunningBinaryUpgrade 164.74
237 TestStoppedBinaryUpgrade/Upgrade 293.45
257 TestPause/serial/SecondStartNoReconfiguration 58.29
270 TestStartStop/group/no-preload/serial/Stop 139.67
272 TestStartStop/group/embed-certs/serial/Stop 140.1
275 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.49
276 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
277 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
279 TestStartStop/group/embed-certs/serial/SecondStart 410.04
282 TestStartStop/group/old-k8s-version/serial/Stop 139.48
283 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
287 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 596.21
288 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.95
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.87
290 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.86
291 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 560.12
292 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 423.47
293 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 267.85
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 128.85
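
Any entry in this table can be re-run on its own with the standard Go test runner for local triage. The following is a minimal sketch, assuming a minikube source checkout (the integration tests live under test/integration) and an already-built out/minikube-linux-amd64; driver and runtime selection is harness-specific and left out.

    # Hedged sketch: re-run one failed test from the table above.
    # The test name goes to -run exactly as listed (slashes select subtests).
    go test ./test/integration \
      -run "TestAddons/parallel/Ingress" \
      -v -timeout 60m
    # Selecting the kvm2 driver and the crio runtime is done via harness-specific
    # flags (assumption: e.g. --minikube-start-args) and is not shown here.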
TestAddons/parallel/Ingress (162.3s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-585092 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context addons-585092 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.700196772s)
addons_test.go:208: (dbg) Run:  kubectl --context addons-585092 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-585092 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2c0ac936-0527-4f27-a95f-78dd93c2afab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2c0ac936-0527-4f27-a95f-78dd93c2afab] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.012100683s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-585092 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.706746223s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-585092 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.136
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-585092 addons disable ingress-dns --alsologtostderr -v=1: (1.229294624s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-585092 addons disable ingress --alsologtostderr -v=1: (7.985344562s)
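
The decisive failure above is the curl step: exit status 28 is curl's operation-timeout code, so the request through the ingress never completed rather than returning a wrong body. A minimal manual reproduction of the same check, assuming the addons-585092 profile is still up and the ingress addon is re-enabled (the cleanup lines above disable it), could look like this:

    # Hedged sketch: repeat the in-test ingress probe by hand.
    # Check that the controller pod is Ready and that an Ingress object exists.
    kubectl --context addons-585092 get pods -n ingress-nginx \
      -l app.kubernetes.io/component=controller
    kubectl --context addons-585092 get ingress
    # Same probe the test runs, with an explicit timeout so a hang is visible.
    out/minikube-linux-amd64 -p addons-585092 ssh \
      "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"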
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-585092 -n addons-585092
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-585092 logs -n 25: (1.20912879s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-953651 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC |                     |
	|         | -p download-only-953651        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-953651 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC |                     |
	|         | -p download-only-953651        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC | 30 Aug 23 21:09 UTC |
	| delete  | -p download-only-953651        | download-only-953651 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC | 30 Aug 23 21:09 UTC |
	| delete  | -p download-only-953651        | download-only-953651 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC | 30 Aug 23 21:09 UTC |
	| start   | --download-only -p             | binary-mirror-727468 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC |                     |
	|         | binary-mirror-727468           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39809         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-727468        | binary-mirror-727468 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC | 30 Aug 23 21:09 UTC |
	| start   | -p addons-585092               | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC | 30 Aug 23 21:11 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | addons-585092 addons           | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:12 UTC | 30 Aug 23 21:12 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:12 UTC | 30 Aug 23 21:12 UTC |
	|         | addons-585092                  |                      |         |         |                     |                     |
	| addons  | addons-585092 addons disable   | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:12 UTC | 30 Aug 23 21:12 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| ip      | addons-585092 ip               | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:12 UTC | 30 Aug 23 21:12 UTC |
	| addons  | addons-585092 addons disable   | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:12 UTC | 30 Aug 23 21:12 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:12 UTC | 30 Aug 23 21:12 UTC |
	|         | -p addons-585092               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ssh     | addons-585092 ssh curl -s      | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:12 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                      |         |         |                     |                     |
	|         | nginx.example.com'             |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:12 UTC | 30 Aug 23 21:12 UTC |
	|         | addons-585092                  |                      |         |         |                     |                     |
	| addons  | addons-585092 addons           | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:13 UTC | 30 Aug 23 21:13 UTC |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-585092 addons           | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:13 UTC | 30 Aug 23 21:13 UTC |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-585092 ip               | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:14 UTC | 30 Aug 23 21:14 UTC |
	| addons  | addons-585092 addons disable   | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:14 UTC | 30 Aug 23 21:14 UTC |
	|         | ingress-dns --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-585092 addons disable   | addons-585092        | jenkins | v1.31.2 | 30 Aug 23 21:14 UTC | 30 Aug 23 21:14 UTC |
	|         | ingress --alsologtostderr -v=1 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:09:34
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:09:34.683884  962961 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:09:34.683995  962961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:09:34.684000  962961 out.go:309] Setting ErrFile to fd 2...
	I0830 21:09:34.684005  962961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:09:34.684215  962961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 21:09:34.684819  962961 out.go:303] Setting JSON to false
	I0830 21:09:34.685719  962961 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10322,"bootTime":1693419453,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:09:34.685780  962961 start.go:138] virtualization: kvm guest
	I0830 21:09:34.688289  962961 out.go:177] * [addons-585092] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 21:09:34.689582  962961 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 21:09:34.690974  962961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:09:34.689597  962961 notify.go:220] Checking for updates...
	I0830 21:09:34.693810  962961 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:09:34.695224  962961 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:09:34.696680  962961 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 21:09:34.697837  962961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:09:34.699230  962961 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:09:34.731742  962961 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 21:09:34.733007  962961 start.go:298] selected driver: kvm2
	I0830 21:09:34.733017  962961 start.go:902] validating driver "kvm2" against <nil>
	I0830 21:09:34.733029  962961 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:09:34.733752  962961 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:09:34.733820  962961 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 21:09:34.747864  962961 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 21:09:34.747920  962961 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 21:09:34.748118  962961 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 21:09:34.748152  962961 cni.go:84] Creating CNI manager for ""
	I0830 21:09:34.748158  962961 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:09:34.748166  962961 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0830 21:09:34.748174  962961 start_flags.go:319] config:
	{Name:addons-585092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-585092 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:09:34.748318  962961 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:09:34.750192  962961 out.go:177] * Starting control plane node addons-585092 in cluster addons-585092
	I0830 21:09:34.751394  962961 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:09:34.751423  962961 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 21:09:34.751430  962961 cache.go:57] Caching tarball of preloaded images
	I0830 21:09:34.751512  962961 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 21:09:34.751524  962961 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 21:09:34.751881  962961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/config.json ...
	I0830 21:09:34.751906  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/config.json: {Name:mkf5f3e50fdacb325e2215e2dfdcb299c96737f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:09:34.752062  962961 start.go:365] acquiring machines lock for addons-585092: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:09:34.752106  962961 start.go:369] acquired machines lock for "addons-585092" in 30.778µs
	I0830 21:09:34.752127  962961 start.go:93] Provisioning new machine with config: &{Name:addons-585092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:addons-585092 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:09:34.752222  962961 start.go:125] createHost starting for "" (driver="kvm2")
	I0830 21:09:34.754562  962961 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0830 21:09:34.754673  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:09:34.754721  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:09:34.768858  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0830 21:09:34.769387  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:09:34.770141  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:09:34.770173  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:09:34.770580  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:09:34.770754  962961 main.go:141] libmachine: (addons-585092) Calling .GetMachineName
	I0830 21:09:34.770927  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:09:34.771102  962961 start.go:159] libmachine.API.Create for "addons-585092" (driver="kvm2")
	I0830 21:09:34.771136  962961 client.go:168] LocalClient.Create starting
	I0830 21:09:34.771181  962961 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem
	I0830 21:09:34.945740  962961 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem
	I0830 21:09:35.192996  962961 main.go:141] libmachine: Running pre-create checks...
	I0830 21:09:35.193026  962961 main.go:141] libmachine: (addons-585092) Calling .PreCreateCheck
	I0830 21:09:35.193576  962961 main.go:141] libmachine: (addons-585092) Calling .GetConfigRaw
	I0830 21:09:35.194054  962961 main.go:141] libmachine: Creating machine...
	I0830 21:09:35.194071  962961 main.go:141] libmachine: (addons-585092) Calling .Create
	I0830 21:09:35.194234  962961 main.go:141] libmachine: (addons-585092) Creating KVM machine...
	I0830 21:09:35.195353  962961 main.go:141] libmachine: (addons-585092) DBG | found existing default KVM network
	I0830 21:09:35.196218  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:35.196014  962983 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298c0}
	I0830 21:09:35.201935  962961 main.go:141] libmachine: (addons-585092) DBG | trying to create private KVM network mk-addons-585092 192.168.39.0/24...
	I0830 21:09:35.268871  962961 main.go:141] libmachine: (addons-585092) DBG | private KVM network mk-addons-585092 192.168.39.0/24 created
	I0830 21:09:35.268905  962961 main.go:141] libmachine: (addons-585092) Setting up store path in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092 ...
	I0830 21:09:35.268921  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:35.268852  962983 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:09:35.268940  962961 main.go:141] libmachine: (addons-585092) Building disk image from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 21:09:35.269099  962961 main.go:141] libmachine: (addons-585092) Downloading /home/jenkins/minikube-integration/17114-955377/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 21:09:35.497607  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:35.497446  962983 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa...
	I0830 21:09:35.674985  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:35.674848  962983 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/addons-585092.rawdisk...
	I0830 21:09:35.675019  962961 main.go:141] libmachine: (addons-585092) DBG | Writing magic tar header
	I0830 21:09:35.675041  962961 main.go:141] libmachine: (addons-585092) DBG | Writing SSH key tar header
	I0830 21:09:35.675064  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:35.675022  962983 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092 ...
	I0830 21:09:35.675203  962961 main.go:141] libmachine: (addons-585092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092
	I0830 21:09:35.675231  962961 main.go:141] libmachine: (addons-585092) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092 (perms=drwx------)
	I0830 21:09:35.675252  962961 main.go:141] libmachine: (addons-585092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines
	I0830 21:09:35.675274  962961 main.go:141] libmachine: (addons-585092) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines (perms=drwxr-xr-x)
	I0830 21:09:35.675287  962961 main.go:141] libmachine: (addons-585092) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube (perms=drwxr-xr-x)
	I0830 21:09:35.675294  962961 main.go:141] libmachine: (addons-585092) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377 (perms=drwxrwxr-x)
	I0830 21:09:35.675305  962961 main.go:141] libmachine: (addons-585092) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 21:09:35.675316  962961 main.go:141] libmachine: (addons-585092) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 21:09:35.675400  962961 main.go:141] libmachine: (addons-585092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:09:35.675434  962961 main.go:141] libmachine: (addons-585092) Creating domain...
	I0830 21:09:35.675450  962961 main.go:141] libmachine: (addons-585092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377
	I0830 21:09:35.675466  962961 main.go:141] libmachine: (addons-585092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 21:09:35.675474  962961 main.go:141] libmachine: (addons-585092) DBG | Checking permissions on dir: /home/jenkins
	I0830 21:09:35.675485  962961 main.go:141] libmachine: (addons-585092) DBG | Checking permissions on dir: /home
	I0830 21:09:35.675494  962961 main.go:141] libmachine: (addons-585092) DBG | Skipping /home - not owner
	I0830 21:09:35.676378  962961 main.go:141] libmachine: (addons-585092) define libvirt domain using xml: 
	I0830 21:09:35.676397  962961 main.go:141] libmachine: (addons-585092) <domain type='kvm'>
	I0830 21:09:35.676407  962961 main.go:141] libmachine: (addons-585092)   <name>addons-585092</name>
	I0830 21:09:35.676415  962961 main.go:141] libmachine: (addons-585092)   <memory unit='MiB'>4000</memory>
	I0830 21:09:35.676425  962961 main.go:141] libmachine: (addons-585092)   <vcpu>2</vcpu>
	I0830 21:09:35.676438  962961 main.go:141] libmachine: (addons-585092)   <features>
	I0830 21:09:35.676466  962961 main.go:141] libmachine: (addons-585092)     <acpi/>
	I0830 21:09:35.676485  962961 main.go:141] libmachine: (addons-585092)     <apic/>
	I0830 21:09:35.676506  962961 main.go:141] libmachine: (addons-585092)     <pae/>
	I0830 21:09:35.676519  962961 main.go:141] libmachine: (addons-585092)     
	I0830 21:09:35.676533  962961 main.go:141] libmachine: (addons-585092)   </features>
	I0830 21:09:35.676545  962961 main.go:141] libmachine: (addons-585092)   <cpu mode='host-passthrough'>
	I0830 21:09:35.676574  962961 main.go:141] libmachine: (addons-585092)   
	I0830 21:09:35.676592  962961 main.go:141] libmachine: (addons-585092)   </cpu>
	I0830 21:09:35.676599  962961 main.go:141] libmachine: (addons-585092)   <os>
	I0830 21:09:35.676610  962961 main.go:141] libmachine: (addons-585092)     <type>hvm</type>
	I0830 21:09:35.676617  962961 main.go:141] libmachine: (addons-585092)     <boot dev='cdrom'/>
	I0830 21:09:35.676622  962961 main.go:141] libmachine: (addons-585092)     <boot dev='hd'/>
	I0830 21:09:35.676629  962961 main.go:141] libmachine: (addons-585092)     <bootmenu enable='no'/>
	I0830 21:09:35.676648  962961 main.go:141] libmachine: (addons-585092)   </os>
	I0830 21:09:35.676661  962961 main.go:141] libmachine: (addons-585092)   <devices>
	I0830 21:09:35.676673  962961 main.go:141] libmachine: (addons-585092)     <disk type='file' device='cdrom'>
	I0830 21:09:35.676683  962961 main.go:141] libmachine: (addons-585092)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/boot2docker.iso'/>
	I0830 21:09:35.676690  962961 main.go:141] libmachine: (addons-585092)       <target dev='hdc' bus='scsi'/>
	I0830 21:09:35.676697  962961 main.go:141] libmachine: (addons-585092)       <readonly/>
	I0830 21:09:35.676704  962961 main.go:141] libmachine: (addons-585092)     </disk>
	I0830 21:09:35.676711  962961 main.go:141] libmachine: (addons-585092)     <disk type='file' device='disk'>
	I0830 21:09:35.676717  962961 main.go:141] libmachine: (addons-585092)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 21:09:35.676729  962961 main.go:141] libmachine: (addons-585092)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/addons-585092.rawdisk'/>
	I0830 21:09:35.676737  962961 main.go:141] libmachine: (addons-585092)       <target dev='hda' bus='virtio'/>
	I0830 21:09:35.676743  962961 main.go:141] libmachine: (addons-585092)     </disk>
	I0830 21:09:35.676751  962961 main.go:141] libmachine: (addons-585092)     <interface type='network'>
	I0830 21:09:35.676757  962961 main.go:141] libmachine: (addons-585092)       <source network='mk-addons-585092'/>
	I0830 21:09:35.676765  962961 main.go:141] libmachine: (addons-585092)       <model type='virtio'/>
	I0830 21:09:35.676770  962961 main.go:141] libmachine: (addons-585092)     </interface>
	I0830 21:09:35.676778  962961 main.go:141] libmachine: (addons-585092)     <interface type='network'>
	I0830 21:09:35.676785  962961 main.go:141] libmachine: (addons-585092)       <source network='default'/>
	I0830 21:09:35.676796  962961 main.go:141] libmachine: (addons-585092)       <model type='virtio'/>
	I0830 21:09:35.676802  962961 main.go:141] libmachine: (addons-585092)     </interface>
	I0830 21:09:35.676812  962961 main.go:141] libmachine: (addons-585092)     <serial type='pty'>
	I0830 21:09:35.676819  962961 main.go:141] libmachine: (addons-585092)       <target port='0'/>
	I0830 21:09:35.676826  962961 main.go:141] libmachine: (addons-585092)     </serial>
	I0830 21:09:35.676832  962961 main.go:141] libmachine: (addons-585092)     <console type='pty'>
	I0830 21:09:35.676840  962961 main.go:141] libmachine: (addons-585092)       <target type='serial' port='0'/>
	I0830 21:09:35.676845  962961 main.go:141] libmachine: (addons-585092)     </console>
	I0830 21:09:35.676852  962961 main.go:141] libmachine: (addons-585092)     <rng model='virtio'>
	I0830 21:09:35.676859  962961 main.go:141] libmachine: (addons-585092)       <backend model='random'>/dev/random</backend>
	I0830 21:09:35.676866  962961 main.go:141] libmachine: (addons-585092)     </rng>
	I0830 21:09:35.676871  962961 main.go:141] libmachine: (addons-585092)     
	I0830 21:09:35.676882  962961 main.go:141] libmachine: (addons-585092)     
	I0830 21:09:35.676888  962961 main.go:141] libmachine: (addons-585092)   </devices>
	I0830 21:09:35.676895  962961 main.go:141] libmachine: (addons-585092) </domain>
	I0830 21:09:35.676903  962961 main.go:141] libmachine: (addons-585092) 
	I0830 21:09:35.683243  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:cb:27:5c in network default
	I0830 21:09:35.683754  962961 main.go:141] libmachine: (addons-585092) Ensuring networks are active...
	I0830 21:09:35.683791  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:35.684424  962961 main.go:141] libmachine: (addons-585092) Ensuring network default is active
	I0830 21:09:35.684744  962961 main.go:141] libmachine: (addons-585092) Ensuring network mk-addons-585092 is active
	I0830 21:09:35.685238  962961 main.go:141] libmachine: (addons-585092) Getting domain xml...
	I0830 21:09:35.685805  962961 main.go:141] libmachine: (addons-585092) Creating domain...
	I0830 21:09:37.084863  962961 main.go:141] libmachine: (addons-585092) Waiting to get IP...
	I0830 21:09:37.085604  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:37.085952  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:37.085996  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:37.085947  962983 retry.go:31] will retry after 221.474072ms: waiting for machine to come up
	I0830 21:09:37.309439  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:37.309863  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:37.309895  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:37.309798  962983 retry.go:31] will retry after 274.872195ms: waiting for machine to come up
	I0830 21:09:37.586350  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:37.586718  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:37.586752  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:37.586660  962983 retry.go:31] will retry after 391.22181ms: waiting for machine to come up
	I0830 21:09:37.979093  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:37.979496  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:37.979527  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:37.979461  962983 retry.go:31] will retry after 561.092967ms: waiting for machine to come up
	I0830 21:09:38.542118  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:38.542512  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:38.542546  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:38.542444  962983 retry.go:31] will retry after 714.358492ms: waiting for machine to come up
	I0830 21:09:39.258029  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:39.258631  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:39.258668  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:39.258587  962983 retry.go:31] will retry after 867.120623ms: waiting for machine to come up
	I0830 21:09:40.127118  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:40.127561  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:40.127599  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:40.127538  962983 retry.go:31] will retry after 1.15214536s: waiting for machine to come up
	I0830 21:09:41.281913  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:41.282467  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:41.282502  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:41.282389  962983 retry.go:31] will retry after 1.097234209s: waiting for machine to come up
	I0830 21:09:42.381660  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:42.382178  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:42.382207  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:42.382120  962983 retry.go:31] will retry after 1.567866563s: waiting for machine to come up
	I0830 21:09:43.951797  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:43.952194  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:43.952221  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:43.952141  962983 retry.go:31] will retry after 1.945898163s: waiting for machine to come up
	I0830 21:09:45.899423  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:45.899973  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:45.900012  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:45.899890  962983 retry.go:31] will retry after 2.829333084s: waiting for machine to come up
	I0830 21:09:48.732985  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:48.733339  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:48.733377  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:48.733322  962983 retry.go:31] will retry after 3.421081701s: waiting for machine to come up
	I0830 21:09:52.157343  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:52.157771  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:52.157794  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:52.157764  962983 retry.go:31] will retry after 2.877627922s: waiting for machine to come up
	I0830 21:09:55.037418  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:55.037727  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find current IP address of domain addons-585092 in network mk-addons-585092
	I0830 21:09:55.037757  962961 main.go:141] libmachine: (addons-585092) DBG | I0830 21:09:55.037702  962983 retry.go:31] will retry after 4.698357189s: waiting for machine to come up
	I0830 21:09:59.737562  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:59.738016  962961 main.go:141] libmachine: (addons-585092) Found IP for machine: 192.168.39.136
	I0830 21:09:59.738037  962961 main.go:141] libmachine: (addons-585092) Reserving static IP address...
	I0830 21:09:59.738061  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has current primary IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:59.738462  962961 main.go:141] libmachine: (addons-585092) DBG | unable to find host DHCP lease matching {name: "addons-585092", mac: "52:54:00:8b:39:95", ip: "192.168.39.136"} in network mk-addons-585092
	I0830 21:09:59.808354  962961 main.go:141] libmachine: (addons-585092) DBG | Getting to WaitForSSH function...
	I0830 21:09:59.808385  962961 main.go:141] libmachine: (addons-585092) Reserved static IP address: 192.168.39.136
	I0830 21:09:59.808399  962961 main.go:141] libmachine: (addons-585092) Waiting for SSH to be available...
	I0830 21:09:59.810862  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:59.811286  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:39:95}
	I0830 21:09:59.811321  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:59.811463  962961 main.go:141] libmachine: (addons-585092) DBG | Using SSH client type: external
	I0830 21:09:59.811495  962961 main.go:141] libmachine: (addons-585092) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa (-rw-------)
	I0830 21:09:59.811544  962961 main.go:141] libmachine: (addons-585092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 21:09:59.811563  962961 main.go:141] libmachine: (addons-585092) DBG | About to run SSH command:
	I0830 21:09:59.811575  962961 main.go:141] libmachine: (addons-585092) DBG | exit 0
	I0830 21:09:59.899983  962961 main.go:141] libmachine: (addons-585092) DBG | SSH cmd err, output: <nil>: 
	I0830 21:09:59.900292  962961 main.go:141] libmachine: (addons-585092) KVM machine creation complete!
	I0830 21:09:59.900592  962961 main.go:141] libmachine: (addons-585092) Calling .GetConfigRaw
	I0830 21:09:59.929756  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:09:59.930088  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:09:59.930326  962961 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 21:09:59.930346  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:09:59.931847  962961 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 21:09:59.931862  962961 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 21:09:59.931877  962961 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 21:09:59.931885  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:09:59.934052  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:59.934370  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:09:59.934415  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:09:59.934479  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:09:59.934655  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:09:59.934814  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:09:59.934941  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:09:59.935092  962961 main.go:141] libmachine: Using SSH client type: native
	I0830 21:09:59.935532  962961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0830 21:09:59.935544  962961 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 21:10:00.051095  962961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:10:00.051124  962961 main.go:141] libmachine: Detecting the provisioner...
	I0830 21:10:00.051136  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:00.053714  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.053991  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:00.054033  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.054116  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:00.054331  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.054513  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.054659  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:00.054834  962961 main.go:141] libmachine: Using SSH client type: native
	I0830 21:10:00.055486  962961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0830 21:10:00.055505  962961 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 21:10:00.172562  962961 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 21:10:00.172720  962961 main.go:141] libmachine: found compatible host: buildroot
	I0830 21:10:00.172739  962961 main.go:141] libmachine: Provisioning with buildroot...
	I0830 21:10:00.172752  962961 main.go:141] libmachine: (addons-585092) Calling .GetMachineName
	I0830 21:10:00.173056  962961 buildroot.go:166] provisioning hostname "addons-585092"
	I0830 21:10:00.173079  962961 main.go:141] libmachine: (addons-585092) Calling .GetMachineName
	I0830 21:10:00.173282  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:00.175840  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.176203  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:00.176240  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.176367  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:00.176549  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.176717  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.176851  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:00.177025  962961 main.go:141] libmachine: Using SSH client type: native
	I0830 21:10:00.177411  962961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0830 21:10:00.177425  962961 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-585092 && echo "addons-585092" | sudo tee /etc/hostname
	I0830 21:10:00.305788  962961 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-585092
	
	I0830 21:10:00.305822  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:00.308996  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.309391  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:00.309423  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.309624  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:00.309833  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.310031  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.310191  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:00.310388  962961 main.go:141] libmachine: Using SSH client type: native
	I0830 21:10:00.310795  962961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0830 21:10:00.310812  962961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-585092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-585092/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-585092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:10:00.432860  962961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:10:00.432892  962961 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 21:10:00.432915  962961 buildroot.go:174] setting up certificates
	I0830 21:10:00.432927  962961 provision.go:83] configureAuth start
	I0830 21:10:00.432940  962961 main.go:141] libmachine: (addons-585092) Calling .GetMachineName
	I0830 21:10:00.433242  962961 main.go:141] libmachine: (addons-585092) Calling .GetIP
	I0830 21:10:00.436001  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.436380  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:00.436412  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.436526  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:00.438819  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.439126  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:00.439176  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.439252  962961 provision.go:138] copyHostCerts
	I0830 21:10:00.439337  962961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 21:10:00.439528  962961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 21:10:00.439625  962961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 21:10:00.439709  962961 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.addons-585092 san=[192.168.39.136 192.168.39.136 localhost 127.0.0.1 minikube addons-585092]
	I0830 21:10:00.549880  962961 provision.go:172] copyRemoteCerts
	I0830 21:10:00.549959  962961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:10:00.549994  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:00.552747  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.553116  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:00.553152  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.553332  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:00.553521  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.553686  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:00.553846  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:00.640800  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 21:10:00.664758  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0830 21:10:00.688691  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 21:10:00.711148  962961 provision.go:86] duration metric: configureAuth took 278.200562ms
	I0830 21:10:00.711179  962961 buildroot.go:189] setting minikube options for container-runtime
	I0830 21:10:00.711422  962961 config.go:182] Loaded profile config "addons-585092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:10:00.711531  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:00.714003  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.714371  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:00.714406  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:00.714621  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:00.714821  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.714980  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:00.715117  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:00.715260  962961 main.go:141] libmachine: Using SSH client type: native
	I0830 21:10:00.715675  962961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0830 21:10:00.715694  962961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:10:01.223056  962961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:10:01.223084  962961 main.go:141] libmachine: Checking connection to Docker...
	I0830 21:10:01.223131  962961 main.go:141] libmachine: (addons-585092) Calling .GetURL
	I0830 21:10:01.224534  962961 main.go:141] libmachine: (addons-585092) DBG | Using libvirt version 6000000
	I0830 21:10:01.226774  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.227193  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:01.227227  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.227369  962961 main.go:141] libmachine: Docker is up and running!
	I0830 21:10:01.227385  962961 main.go:141] libmachine: Reticulating splines...
	I0830 21:10:01.227394  962961 client.go:171] LocalClient.Create took 26.456247375s
	I0830 21:10:01.227445  962961 start.go:167] duration metric: libmachine.API.Create for "addons-585092" took 26.456320821s
	I0830 21:10:01.227465  962961 start.go:300] post-start starting for "addons-585092" (driver="kvm2")
	I0830 21:10:01.227483  962961 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:10:01.227521  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:01.227817  962961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:10:01.227855  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:01.229955  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.230309  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:01.230340  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.230441  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:01.230601  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:01.230772  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:01.230917  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:01.316839  962961 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:10:01.321029  962961 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 21:10:01.321057  962961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 21:10:01.321174  962961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 21:10:01.321207  962961 start.go:303] post-start completed in 93.730827ms
	I0830 21:10:01.321252  962961 main.go:141] libmachine: (addons-585092) Calling .GetConfigRaw
	I0830 21:10:01.321814  962961 main.go:141] libmachine: (addons-585092) Calling .GetIP
	I0830 21:10:01.324403  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.324753  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:01.324784  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.325038  962961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/config.json ...
	I0830 21:10:01.325211  962961 start.go:128] duration metric: createHost completed in 26.572974951s
	I0830 21:10:01.325233  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:01.327484  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.327807  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:01.327846  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.327939  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:01.328110  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:01.328274  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:01.328369  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:01.328499  962961 main.go:141] libmachine: Using SSH client type: native
	I0830 21:10:01.329048  962961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0830 21:10:01.329064  962961 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 21:10:01.444532  962961 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693429801.427832503
	
	I0830 21:10:01.444563  962961 fix.go:206] guest clock: 1693429801.427832503
	I0830 21:10:01.444573  962961 fix.go:219] Guest: 2023-08-30 21:10:01.427832503 +0000 UTC Remote: 2023-08-30 21:10:01.3252228 +0000 UTC m=+26.691518386 (delta=102.609703ms)
	I0830 21:10:01.444631  962961 fix.go:190] guest clock delta is within tolerance: 102.609703ms
	I0830 21:10:01.444639  962961 start.go:83] releasing machines lock for "addons-585092", held for 26.692522681s
	I0830 21:10:01.444677  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:01.444992  962961 main.go:141] libmachine: (addons-585092) Calling .GetIP
	I0830 21:10:01.447325  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.447661  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:01.447697  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.447873  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:01.448394  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:01.448565  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:01.448650  962961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:10:01.448727  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:01.448743  962961 ssh_runner.go:195] Run: cat /version.json
	I0830 21:10:01.448765  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:01.451416  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.451443  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.451713  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:01.451762  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.451811  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:01.451833  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:01.451874  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:01.451995  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:01.452140  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:01.452155  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:01.452309  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:01.452316  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:01.452470  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:01.452473  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:01.554106  962961 ssh_runner.go:195] Run: systemctl --version
	I0830 21:10:01.559502  962961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:10:01.721397  962961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 21:10:01.727358  962961 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 21:10:01.727424  962961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:10:01.743415  962961 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 21:10:01.743449  962961 start.go:466] detecting cgroup driver to use...
	I0830 21:10:01.743531  962961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:10:01.758863  962961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:10:01.771965  962961 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:10:01.772054  962961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:10:01.785318  962961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:10:01.798908  962961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:10:01.906942  962961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:10:02.024431  962961 docker.go:212] disabling docker service ...
	I0830 21:10:02.024506  962961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:10:02.037996  962961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:10:02.050033  962961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:10:02.156654  962961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:10:02.264819  962961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:10:02.278440  962961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:10:02.295311  962961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 21:10:02.295385  962961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:10:02.304224  962961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:10:02.304285  962961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:10:02.312713  962961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:10:02.321224  962961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:10:02.329578  962961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:10:02.338476  962961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:10:02.346106  962961 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:10:02.346213  962961 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 21:10:02.358031  962961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:10:02.366574  962961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:10:02.471853  962961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 21:10:02.641725  962961 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:10:02.641870  962961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:10:02.647897  962961 start.go:534] Will wait 60s for crictl version
	I0830 21:10:02.647959  962961 ssh_runner.go:195] Run: which crictl
	I0830 21:10:02.654557  962961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:10:02.685966  962961 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 21:10:02.686095  962961 ssh_runner.go:195] Run: crio --version
	I0830 21:10:02.732175  962961 ssh_runner.go:195] Run: crio --version
	I0830 21:10:02.783413  962961 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 21:10:02.784863  962961 main.go:141] libmachine: (addons-585092) Calling .GetIP
	I0830 21:10:02.787508  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:02.787838  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:02.787870  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:02.788068  962961 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 21:10:02.792056  962961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:10:02.804311  962961 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:10:02.804364  962961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:10:02.833098  962961 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 21:10:02.833172  962961 ssh_runner.go:195] Run: which lz4
	I0830 21:10:02.836992  962961 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 21:10:02.840949  962961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 21:10:02.840982  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 21:10:04.604218  962961 crio.go:444] Took 1.767265 seconds to copy over tarball
	I0830 21:10:04.604325  962961 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 21:10:07.891277  962961 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.286921923s)
	I0830 21:10:07.891307  962961 crio.go:451] Took 3.287061 seconds to extract the tarball
	I0830 21:10:07.891318  962961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 21:10:07.933139  962961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:10:07.988604  962961 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 21:10:07.988630  962961 cache_images.go:84] Images are preloaded, skipping loading
	I0830 21:10:07.988723  962961 ssh_runner.go:195] Run: crio config
	I0830 21:10:08.053824  962961 cni.go:84] Creating CNI manager for ""
	I0830 21:10:08.053852  962961 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:10:08.053875  962961 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:10:08.053918  962961 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-585092 NodeName:addons-585092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:10:08.054084  962961 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-585092"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:10:08.054153  962961 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-585092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-585092 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 21:10:08.054228  962961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 21:10:08.064130  962961 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:10:08.064222  962961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 21:10:08.073651  962961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0830 21:10:08.090388  962961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 21:10:08.108146  962961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0830 21:10:08.124409  962961 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I0830 21:10:08.128052  962961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:10:08.139841  962961 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092 for IP: 192.168.39.136
	I0830 21:10:08.139890  962961 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:08.140075  962961 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 21:10:08.257105  962961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt ...
	I0830 21:10:08.257136  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt: {Name:mk72df011255e9fd0f2a9a4e871f1b4851343e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:08.257310  962961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key ...
	I0830 21:10:08.257320  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key: {Name:mk59c183c01e21b4f60829dfa7dd4cac414e9514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:08.257389  962961 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 21:10:08.578011  962961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt ...
	I0830 21:10:08.578044  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt: {Name:mke68df2d7512c2ff1a7cf756812e7dadae1e32c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:08.578215  962961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key ...
	I0830 21:10:08.578225  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key: {Name:mk55e1d3a6eb550868af7a77b7f6c8d1ef511b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:08.578350  962961 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.key
	I0830 21:10:08.578364  962961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt with IP's: []
	I0830 21:10:08.769271  962961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt ...
	I0830 21:10:08.769304  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: {Name:mk1160614e6ec59bf52e4658c1bef41b72b65ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:08.769465  962961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.key ...
	I0830 21:10:08.769484  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.key: {Name:mkeb6fee31850bc561e5b1d59ffc47a6758e1c82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:08.769545  962961 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.key.015ac7b4
	I0830 21:10:08.769566  962961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.crt.015ac7b4 with IP's: [192.168.39.136 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 21:10:09.042499  962961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.crt.015ac7b4 ...
	I0830 21:10:09.042540  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.crt.015ac7b4: {Name:mkb014893ac40ff47c5a82230b9e3f9c54ab6832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:09.042740  962961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.key.015ac7b4 ...
	I0830 21:10:09.042756  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.key.015ac7b4: {Name:mke799e5a3a8aec6cfcae54b7bcba1e691e2916e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:09.042844  962961 certs.go:337] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.crt.015ac7b4 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.crt
	I0830 21:10:09.042966  962961 certs.go:341] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.key.015ac7b4 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.key
	I0830 21:10:09.043025  962961 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/proxy-client.key
	I0830 21:10:09.043047  962961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/proxy-client.crt with IP's: []
	I0830 21:10:09.423257  962961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/proxy-client.crt ...
	I0830 21:10:09.423291  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/proxy-client.crt: {Name:mk36346418d38ca8f4bfd95eb2b12db7fb246fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:09.423484  962961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/proxy-client.key ...
	I0830 21:10:09.423499  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/proxy-client.key: {Name:mkbfca0bfe4ece011fd4b0c2fd2b5cffb3a76117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:09.423667  962961 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 21:10:09.423707  962961 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 21:10:09.423738  962961 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:10:09.423763  962961 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 21:10:09.424532  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 21:10:09.448242  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 21:10:09.469777  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 21:10:09.490959  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 21:10:09.512648  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:10:09.534035  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:10:09.555322  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:10:09.577263  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 21:10:09.599305  962961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:10:09.621125  962961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 21:10:09.636230  962961 ssh_runner.go:195] Run: openssl version
	I0830 21:10:09.641821  962961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:10:09.651077  962961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:10:09.655610  962961 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:10:09.655663  962961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:10:09.660930  962961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:10:09.669937  962961 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:10:09.673822  962961 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:10:09.673876  962961 kubeadm.go:404] StartCluster: {Name:addons-585092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-585092 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:10:09.673979  962961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 21:10:09.674027  962961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:10:09.703530  962961 cri.go:89] found id: ""
	I0830 21:10:09.703642  962961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 21:10:09.713512  962961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 21:10:09.722611  962961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 21:10:09.730871  962961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:10:09.730928  962961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 21:10:09.914737  962961 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 21:10:21.094040  962961 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 21:10:21.094131  962961 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 21:10:21.094245  962961 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 21:10:21.094371  962961 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 21:10:21.094533  962961 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 21:10:21.094628  962961 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 21:10:21.096195  962961 out.go:204]   - Generating certificates and keys ...
	I0830 21:10:21.096292  962961 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 21:10:21.096387  962961 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 21:10:21.096489  962961 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 21:10:21.096576  962961 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 21:10:21.096667  962961 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 21:10:21.096734  962961 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 21:10:21.096804  962961 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 21:10:21.096967  962961 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-585092 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I0830 21:10:21.097053  962961 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 21:10:21.097213  962961 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-585092 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I0830 21:10:21.097310  962961 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 21:10:21.097399  962961 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 21:10:21.097471  962961 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 21:10:21.097556  962961 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 21:10:21.097639  962961 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 21:10:21.097711  962961 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 21:10:21.097783  962961 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 21:10:21.097848  962961 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 21:10:21.097958  962961 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 21:10:21.098056  962961 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 21:10:21.099648  962961 out.go:204]   - Booting up control plane ...
	I0830 21:10:21.099739  962961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 21:10:21.099866  962961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 21:10:21.099932  962961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 21:10:21.100024  962961 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:10:21.100110  962961 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:10:21.100143  962961 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 21:10:21.100284  962961 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 21:10:21.100384  962961 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002296 seconds
	I0830 21:10:21.100508  962961 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 21:10:21.100687  962961 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 21:10:21.100775  962961 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 21:10:21.101008  962961 kubeadm.go:322] [mark-control-plane] Marking the node addons-585092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 21:10:21.101080  962961 kubeadm.go:322] [bootstrap-token] Using token: 63njpp.1h7n9nw55gntqnpz
	I0830 21:10:21.102537  962961 out.go:204]   - Configuring RBAC rules ...
	I0830 21:10:21.102670  962961 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 21:10:21.102780  962961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 21:10:21.102933  962961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 21:10:21.103079  962961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 21:10:21.103206  962961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 21:10:21.103315  962961 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 21:10:21.103442  962961 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 21:10:21.103513  962961 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 21:10:21.103557  962961 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 21:10:21.103564  962961 kubeadm.go:322] 
	I0830 21:10:21.103612  962961 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 21:10:21.103617  962961 kubeadm.go:322] 
	I0830 21:10:21.103687  962961 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 21:10:21.103693  962961 kubeadm.go:322] 
	I0830 21:10:21.103732  962961 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 21:10:21.103832  962961 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 21:10:21.103901  962961 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 21:10:21.103911  962961 kubeadm.go:322] 
	I0830 21:10:21.103971  962961 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 21:10:21.103978  962961 kubeadm.go:322] 
	I0830 21:10:21.104051  962961 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 21:10:21.104058  962961 kubeadm.go:322] 
	I0830 21:10:21.104100  962961 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 21:10:21.104163  962961 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 21:10:21.104224  962961 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 21:10:21.104231  962961 kubeadm.go:322] 
	I0830 21:10:21.104312  962961 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 21:10:21.104407  962961 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 21:10:21.104415  962961 kubeadm.go:322] 
	I0830 21:10:21.104488  962961 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 63njpp.1h7n9nw55gntqnpz \
	I0830 21:10:21.104591  962961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 21:10:21.104628  962961 kubeadm.go:322] 	--control-plane 
	I0830 21:10:21.104669  962961 kubeadm.go:322] 
	I0830 21:10:21.104779  962961 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 21:10:21.104788  962961 kubeadm.go:322] 
	I0830 21:10:21.104853  962961 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 63njpp.1h7n9nw55gntqnpz \
	I0830 21:10:21.104975  962961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 21:10:21.104991  962961 cni.go:84] Creating CNI manager for ""
	I0830 21:10:21.105002  962961 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:10:21.107597  962961 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 21:10:21.109021  962961 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 21:10:21.134866  962961 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 21:10:21.205442  962961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 21:10:21.205549  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:21.205549  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=addons-585092 minikube.k8s.io/updated_at=2023_08_30T21_10_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:21.240600  962961 ops.go:34] apiserver oom_adj: -16
	I0830 21:10:21.369602  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:21.492485  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:22.083627  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:22.583644  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:23.084018  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:23.583779  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:24.083203  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:24.583366  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:25.083053  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:25.583323  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:26.083657  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:26.583672  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:27.083222  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:27.583198  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:28.083155  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:28.583086  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:29.083496  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:29.583909  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:30.083901  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:30.583898  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:31.083677  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:31.583363  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:32.083751  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:32.583049  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:33.083200  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:33.583111  962961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:10:33.713082  962961 kubeadm.go:1081] duration metric: took 12.507613598s to wait for elevateKubeSystemPrivileges.
	I0830 21:10:33.713112  962961 kubeadm.go:406] StartCluster complete in 24.039241544s
	I0830 21:10:33.713132  962961 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:33.713277  962961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:10:33.713790  962961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:10:33.714002  962961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 21:10:33.714105  962961 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0830 21:10:33.714223  962961 addons.go:69] Setting ingress=true in profile "addons-585092"
	I0830 21:10:33.714264  962961 addons.go:231] Setting addon ingress=true in "addons-585092"
	I0830 21:10:33.714240  962961 addons.go:69] Setting metrics-server=true in profile "addons-585092"
	I0830 21:10:33.714309  962961 addons.go:231] Setting addon metrics-server=true in "addons-585092"
	I0830 21:10:33.714285  962961 addons.go:69] Setting ingress-dns=true in profile "addons-585092"
	I0830 21:10:33.714365  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.714359  962961 addons.go:69] Setting default-storageclass=true in profile "addons-585092"
	I0830 21:10:33.714359  962961 addons.go:69] Setting inspektor-gadget=true in profile "addons-585092"
	I0830 21:10:33.714386  962961 addons.go:231] Setting addon ingress-dns=true in "addons-585092"
	I0830 21:10:33.714399  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.714413  962961 addons.go:231] Setting addon inspektor-gadget=true in "addons-585092"
	I0830 21:10:33.714425  962961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-585092"
	I0830 21:10:33.714452  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.714481  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.714855  962961 addons.go:69] Setting gcp-auth=true in profile "addons-585092"
	I0830 21:10:33.714880  962961 mustload.go:65] Loading cluster: addons-585092
	I0830 21:10:33.714886  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.714900  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.714917  962961 addons.go:69] Setting cloud-spanner=true in profile "addons-585092"
	I0830 21:10:33.714902  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.714936  962961 addons.go:231] Setting addon cloud-spanner=true in "addons-585092"
	I0830 21:10:33.714947  962961 addons.go:69] Setting registry=true in profile "addons-585092"
	I0830 21:10:33.714909  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.714958  962961 addons.go:231] Setting addon registry=true in "addons-585092"
	I0830 21:10:33.714984  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.714988  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.714997  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.715002  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715096  962961 config.go:182] Loaded profile config "addons-585092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:10:33.715122  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.714205  962961 addons.go:69] Setting volumesnapshots=true in profile "addons-585092"
	I0830 21:10:33.715198  962961 addons.go:231] Setting addon volumesnapshots=true in "addons-585092"
	I0830 21:10:33.715213  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.715236  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715242  962961 addons.go:69] Setting storage-provisioner=true in profile "addons-585092"
	I0830 21:10:33.715252  962961 addons.go:231] Setting addon storage-provisioner=true in "addons-585092"
	I0830 21:10:33.715283  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.715236  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.715330  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.715336  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.715347  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715422  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715464  962961 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-585092"
	I0830 21:10:33.715506  962961 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-585092"
	I0830 21:10:33.715516  962961 addons.go:69] Setting helm-tiller=true in profile "addons-585092"
	I0830 21:10:33.715516  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.715527  962961 addons.go:231] Setting addon helm-tiller=true in "addons-585092"
	I0830 21:10:33.715540  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715551  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.715556  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.715696  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.715708  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.715718  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715736  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715808  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715928  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.715953  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.715928  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.716046  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.716053  962961 config.go:182] Loaded profile config "addons-585092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:10:33.735146  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I0830 21:10:33.735169  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0830 21:10:33.735154  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33639
	I0830 21:10:33.735780  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.735793  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.735965  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.736365  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.736394  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.736444  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.736449  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.736465  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.736467  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.736768  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.736830  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.736835  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.737402  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.737412  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.737430  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.737408  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.737466  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.737478  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.737525  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I0830 21:10:33.738071  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.738657  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.738698  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.739077  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.739328  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.756828  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38417
	I0830 21:10:33.757006  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0830 21:10:33.757081  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0830 21:10:33.757950  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0830 21:10:33.758159  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.758750  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.758763  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.758773  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.759174  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.759354  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.759373  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.760069  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.760086  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.760127  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.760675  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.760846  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.760866  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.760926  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.761464  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.761499  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.761615  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.761726  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.761737  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.761837  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.762097  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.762728  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.762914  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.763589  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.764141  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.764179  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.766983  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42023
	I0830 21:10:33.767433  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.767938  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.767953  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.768324  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.768895  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.768936  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.771040  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I0830 21:10:33.771429  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.771890  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.771907  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.772285  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.772481  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.774053  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.776260  962961 out.go:177]   - Using image docker.io/registry:2.8.1
	I0830 21:10:33.777838  962961 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0830 21:10:33.779283  962961 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0830 21:10:33.779300  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0830 21:10:33.779323  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.780175  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0830 21:10:33.780772  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.781242  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.781260  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.781667  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.781837  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.783325  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.783458  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.785229  962961 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0830 21:10:33.784073  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.784223  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.787136  962961 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0830 21:10:33.787158  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0830 21:10:33.787180  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.787142  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.787753  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.787944  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.788059  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0830 21:10:33.788422  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.789139  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.789736  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.789761  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.789879  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0830 21:10:33.790160  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.790318  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.791062  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.791181  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.791607  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.791999  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.792157  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.793971  962961 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0830 21:10:33.792466  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.792487  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.793208  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.794756  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0830 21:10:33.795475  962961 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 21:10:33.795488  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 21:10:33.795507  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.795575  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.795596  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.796287  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.796692  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.796762  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.797350  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I0830 21:10:33.797484  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43849
	I0830 21:10:33.798007  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.798585  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.798610  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.798684  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.799165  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.799181  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.799562  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.799592  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.799799  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.800032  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.800076  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.800611  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.800650  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.800689  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.801373  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.801416  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.801523  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.801889  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.803764  962961 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 21:10:33.802065  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.802592  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.805277  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.807066  962961 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 21:10:33.805535  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.805679  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.806247  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0830 21:10:33.810229  962961 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0830 21:10:33.808777  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.808852  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.809134  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.811030  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36413
	I0830 21:10:33.811306  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0830 21:10:33.812278  962961 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0830 21:10:33.812291  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0830 21:10:33.812309  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.813030  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.813660  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.813690  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.813712  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.813765  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.813927  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I0830 21:10:33.814278  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.814294  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.814328  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.814345  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.814516  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.814880  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.814955  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.815000  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.815174  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.815178  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.815746  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.815933  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.815973  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.816297  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.816393  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.817127  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.817145  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.817146  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.817211  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.817302  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.819137  962961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:10:33.817498  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.817539  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.817678  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.817717  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.821070  962961 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:10:33.821085  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 21:10:33.821105  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.823310  962961 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0830 21:10:33.821824  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.824602  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.825075  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.826348  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0830 21:10:33.826500  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0830 21:10:33.827888  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.827921  962961 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0830 21:10:33.827941  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0830 21:10:33.827968  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.826466  962961 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0830 21:10:33.828021  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0830 21:10:33.828032  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.826590  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.828077  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.827847  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.828244  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.828341  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.829053  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43835
	I0830 21:10:33.830243  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.830262  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.830338  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.830889  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.830907  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.830967  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.831170  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.831227  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.831270  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.831806  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.831843  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.831860  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.831893  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.831953  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.832194  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.832357  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.832423  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.832451  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.832511  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.833228  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.833338  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.833402  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.833444  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.833665  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.835460  962961 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0830 21:10:33.833923  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.838533  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0830 21:10:33.836989  962961 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0830 21:10:33.837391  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0830 21:10:33.839193  962961 addons.go:231] Setting addon default-storageclass=true in "addons-585092"
	I0830 21:10:33.841091  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0830 21:10:33.841111  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.841093  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:33.842548  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0830 21:10:33.841473  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.841533  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.843471  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.844081  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.844106  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.845807  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0830 21:10:33.844235  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.843958  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.844709  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.847185  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.848787  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0830 21:10:33.847448  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.847537  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.848957  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.850405  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0830 21:10:33.850586  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.850666  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.851804  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0830 21:10:33.853275  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0830 21:10:33.853459  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.854697  962961 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0830 21:10:33.856270  962961 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0830 21:10:33.856287  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0830 21:10:33.857783  962961 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0830 21:10:33.856307  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.859564  962961 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0830 21:10:33.859582  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0830 21:10:33.859601  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.862831  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.863253  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.863267  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.863279  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.863476  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.863640  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.863685  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41159
	I0830 21:10:33.863701  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.863726  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.863867  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.864035  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.864059  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.864078  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.864193  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.864321  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.864425  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:33.864880  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.864907  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.865298  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.865739  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:33.865772  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:33.876873  962961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-585092" context rescaled to 1 replicas
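	(The rescale above reduces the coredns deployment to a single replica for this one-node cluster; it is roughly equivalent to running:

	    kubectl -n kube-system scale deployment coredns --replicas=1
	)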
	I0830 21:10:33.876918  962961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:10:33.879349  962961 out.go:177] * Verifying Kubernetes components...
	I0830 21:10:33.881094  962961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:10:33.880938  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42625
	I0830 21:10:33.881596  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:33.882246  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:33.882260  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:33.882664  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:33.882849  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:33.884505  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:33.884757  962961 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 21:10:33.884771  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 21:10:33.884785  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:33.887455  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.887831  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:33.887864  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:33.887994  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:33.888222  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:33.888368  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:33.888493  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:34.025567  962961 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0830 21:10:34.025590  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0830 21:10:34.082846  962961 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0830 21:10:34.082882  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0830 21:10:34.087400  962961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 21:10:34.087421  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0830 21:10:34.101762  962961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
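	(The pipeline above rewrites the CoreDNS ConfigMap so cluster DNS resolves host.minikube.internal to the host. Based on the sed expressions shown, the replaced Corefile gains a log directive before errors and a hosts block before the forward directive, roughly:

	        log
	        errors
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf ...

	other directives omitted.)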
	I0830 21:10:34.102783  962961 node_ready.go:35] waiting up to 6m0s for node "addons-585092" to be "Ready" ...
	I0830 21:10:34.118540  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:10:34.119931  962961 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0830 21:10:34.119952  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0830 21:10:34.131308  962961 node_ready.go:49] node "addons-585092" has status "Ready":"True"
	I0830 21:10:34.131334  962961 node_ready.go:38] duration metric: took 28.521622ms waiting for node "addons-585092" to be "Ready" ...
	I0830 21:10:34.131343  962961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:10:34.183266  962961 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7nd85" in "kube-system" namespace to be "Ready" ...
	I0830 21:10:34.188147  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0830 21:10:34.213033  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0830 21:10:34.219083  962961 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0830 21:10:34.219114  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0830 21:10:34.225465  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0830 21:10:34.227867  962961 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0830 21:10:34.227887  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0830 21:10:34.230214  962961 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0830 21:10:34.230230  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0830 21:10:34.232950  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 21:10:34.238825  962961 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0830 21:10:34.238850  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0830 21:10:34.258075  962961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 21:10:34.258102  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 21:10:34.280521  962961 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0830 21:10:34.280550  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0830 21:10:34.591892  962961 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0830 21:10:34.591915  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0830 21:10:34.598112  962961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 21:10:34.598133  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 21:10:34.634534  962961 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0830 21:10:34.634571  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0830 21:10:34.642457  962961 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0830 21:10:34.642483  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0830 21:10:34.648620  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0830 21:10:34.650034  962961 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0830 21:10:34.650052  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0830 21:10:34.666375  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 21:10:34.781901  962961 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0830 21:10:34.781934  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0830 21:10:34.790535  962961 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0830 21:10:34.790561  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0830 21:10:34.817969  962961 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0830 21:10:34.817994  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0830 21:10:34.819624  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0830 21:10:34.870832  962961 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0830 21:10:34.870856  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0830 21:10:34.896725  962961 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 21:10:34.896746  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0830 21:10:34.897261  962961 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0830 21:10:34.897282  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0830 21:10:34.946691  962961 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0830 21:10:34.946722  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0830 21:10:34.959971  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 21:10:34.984953  962961 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0830 21:10:34.984984  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0830 21:10:35.042641  962961 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0830 21:10:35.042671  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0830 21:10:35.076735  962961 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0830 21:10:35.076767  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0830 21:10:35.103623  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0830 21:10:35.136637  962961 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0830 21:10:35.136673  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0830 21:10:35.193346  962961 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0830 21:10:35.193382  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0830 21:10:35.226863  962961 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0830 21:10:35.226894  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0830 21:10:35.262365  962961 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0830 21:10:35.262390  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0830 21:10:35.291955  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0830 21:10:37.039097  962961 pod_ready.go:102] pod "coredns-5dd5756b68-7nd85" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:37.691718  962961 pod_ready.go:97] pod "coredns-5dd5756b68-7nd85" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:10:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:10:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:10:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:10:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.136 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-30 21:10:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID: ContainerID: Started:0xc0024caa3a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0830 21:10:37.691788  962961 pod_ready.go:81] duration metric: took 3.508489416s waiting for pod "coredns-5dd5756b68-7nd85" in "kube-system" namespace to be "Ready" ...
	E0830 21:10:37.691802  962961 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-7nd85" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:10:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:10:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:10:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:10:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.136 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-30 21:10:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID: ContainerID: Started:0xc0024caa3a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0830 21:10:37.691813  962961 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace to be "Ready" ...
	I0830 21:10:38.671405  962961 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.569601556s)
	I0830 21:10:38.671452  962961 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
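The 4.5s command completed above rewrites the coredns ConfigMap with sed and kubectl replace so that host.minikube.internal resolves to the host gateway (192.168.39.1 here). Purely as a hedged illustration of the same effect, a minimal client-go sketch follows; the helper name injectHostRecord and the kubeconfig handling are assumptions for the example, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord adds a hosts{} block for host.minikube.internal to the
// coredns Corefile, mirroring what the sed/kubectl-replace pipeline in the
// log achieves. hostIP is the host address as seen from the guest.
func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		return nil // already injected
	}
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	cm.Data["Corefile"] = strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := injectHostRecord(context.Background(), cs, "192.168.39.1"); err != nil {
		panic(err)
	}
}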
	I0830 21:10:40.093873  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:40.502805  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.384225091s)
	I0830 21:10:40.502871  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:40.502885  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:40.503223  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:40.503249  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:40.503261  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:40.503271  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:40.503525  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:40.503543  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:40.525311  962961 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0830 21:10:40.525349  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:40.528465  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:40.528867  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:40.528897  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:40.529068  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:40.529263  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:40.529396  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:40.529499  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
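The sshutil.go:53 line above opens an SSH session to the guest at 192.168.39.136:22 as user docker with the machine's id_rsa key. A minimal golang.org/x/crypto/ssh sketch of establishing such a session is shown below; the sample command is an illustrative assumption, not the exact step the test runs.

package main

import (
	"fmt"
	"net"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no host-key pinning
	}
	client, err := ssh.Dial("tcp", net.JoinHostPort("192.168.39.136", "22"), cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("uname -a") // sample command only
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}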
	I0830 21:10:40.770782  962961 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0830 21:10:40.827724  962961 addons.go:231] Setting addon gcp-auth=true in "addons-585092"
	I0830 21:10:40.827792  962961 host.go:66] Checking if "addons-585092" exists ...
	I0830 21:10:40.828118  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:40.828155  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:40.843280  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33753
	I0830 21:10:40.843683  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:40.844273  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:40.844305  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:40.844638  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:40.845087  962961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:10:40.845112  962961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:10:40.860302  962961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0830 21:10:40.860713  962961 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:10:40.861302  962961 main.go:141] libmachine: Using API Version  1
	I0830 21:10:40.861325  962961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:10:40.861663  962961 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:10:40.861801  962961 main.go:141] libmachine: (addons-585092) Calling .GetState
	I0830 21:10:40.863402  962961 main.go:141] libmachine: (addons-585092) Calling .DriverName
	I0830 21:10:40.863629  962961 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0830 21:10:40.863662  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHHostname
	I0830 21:10:40.866273  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:40.866697  962961 main.go:141] libmachine: (addons-585092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:39:95", ip: ""} in network mk-addons-585092: {Iface:virbr1 ExpiryTime:2023-08-30 22:09:51 +0000 UTC Type:0 Mac:52:54:00:8b:39:95 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-585092 Clientid:01:52:54:00:8b:39:95}
	I0830 21:10:40.866734  962961 main.go:141] libmachine: (addons-585092) DBG | domain addons-585092 has defined IP address 192.168.39.136 and MAC address 52:54:00:8b:39:95 in network mk-addons-585092
	I0830 21:10:40.866899  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHPort
	I0830 21:10:40.867092  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHKeyPath
	I0830 21:10:40.867241  962961 main.go:141] libmachine: (addons-585092) Calling .GetSSHUsername
	I0830 21:10:40.867387  962961 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/addons-585092/id_rsa Username:docker}
	I0830 21:10:41.934713  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.746521475s)
	I0830 21:10:41.934758  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.934770  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.934836  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.721762819s)
	I0830 21:10:41.934861  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.709369223s)
	I0830 21:10:41.934879  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.934882  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.934892  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.934896  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.934914  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.701939728s)
	I0830 21:10:41.934937  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.934947  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.934965  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.286320952s)
	I0830 21:10:41.935011  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.935020  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.935039  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.268634667s)
	I0830 21:10:41.935061  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.935070  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.935182  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.115529864s)
	I0830 21:10:41.935204  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.935213  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.935350  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.975332136s)
	W0830 21:10:41.935377  962961 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0830 21:10:41.935399  962961 retry.go:31] will retry after 141.588055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
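The failure above is a CRD-establishment race: the VolumeSnapshot CRDs and the VolumeSnapshotClass that depends on them are applied in the same batch, so the class has no registered kind yet. retry.go simply re-applies after a short backoff, and the 21:10:42 run adds --force. A minimal sketch of that retry shape follows; the applyWithRetry helper and the backoff values are illustrative assumptions, not minikube's retry.go.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or the attempt
// budget is exhausted, the same shape of retry the log shows
// ("will retry after 141.588055ms").
func applyWithRetry(kubectl string, manifests []string, attempts int) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	backoff := 150 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(kubectl, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		time.Sleep(backoff)
		backoff *= 2 // back off a little longer before each retry
	}
	return lastErr
}

func main() {
	err := applyWithRetry("kubectl", []string{
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
	}, 5)
	if err != nil {
		fmt.Println(err)
	}
}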
	I0830 21:10:41.935474  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.831821197s)
	I0830 21:10:41.935491  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.935499  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.937442  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.937451  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.937464  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.937474  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.937480  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.937483  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.937489  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.937499  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.937509  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.937530  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.937537  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.937555  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.937584  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.937597  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.937606  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.937614  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.937625  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.937641  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.937652  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.937655  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.937661  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.937674  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.937682  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.937691  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.937699  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.937723  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.937731  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.937740  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.937748  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.937795  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.937825  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.937833  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.937843  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.937850  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.938079  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.937585  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.938113  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.938121  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.938124  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.938129  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.938133  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.938209  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.938218  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.938232  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.938246  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.938252  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.938255  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.938259  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.938265  962961 addons.go:467] Verifying addon registry=true in "addons-585092"
	I0830 21:10:41.941601  962961 out.go:177] * Verifying registry addon...
	I0830 21:10:41.938476  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.938494  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.938526  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.938537  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.938540  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.938267  962961 addons.go:467] Verifying addon ingress=true in "addons-585092"
	I0830 21:10:41.940819  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.940848  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.940856  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.940859  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.941672  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.941685  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.941695  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.941703  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.941705  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.943240  962961 addons.go:467] Verifying addon metrics-server=true in "addons-585092"
	I0830 21:10:41.944543  962961 out.go:177] * Verifying ingress addon...
	I0830 21:10:41.943226  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:41.943908  962961 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0830 21:10:41.945893  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:41.946197  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:41.946223  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:41.946223  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:41.946741  962961 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0830 21:10:41.954141  962961 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0830 21:10:41.954157  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:41.962463  962961 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0830 21:10:41.962482  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:41.974578  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:41.977807  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
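Each kapi.go:96 line above is one poll iteration waiting for pods matching a label selector to reach the Ready condition. A minimal client-go sketch of an equivalent wait loop follows; the waitForLabeledPods helper, interval, and timeout are illustrative assumptions, not the test framework's own polling code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls until every pod matching selector in ns reports
// the Ready condition, roughly what the "Waiting for pod with label" loop does.
func waitForLabeledPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet; keep polling
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForLabeledPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
	fmt.Println("registry pods ready:", err == nil)
}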
	I0830 21:10:42.077720  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 21:10:42.210507  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:42.494827  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:42.619497  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:42.928160  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.636145199s)
	I0830 21:10:42.928253  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:42.928287  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:42.928263  962961 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.064609071s)
	I0830 21:10:42.930067  962961 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 21:10:42.928612  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:42.928611  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:42.931471  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:42.932835  962961 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0830 21:10:42.931512  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:42.934332  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:42.934400  962961 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0830 21:10:42.934427  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0830 21:10:42.934648  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:42.934703  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:42.934720  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:42.934740  962961 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-585092"
	I0830 21:10:42.936371  962961 out.go:177] * Verifying csi-hostpath-driver addon...
	I0830 21:10:42.938330  962961 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0830 21:10:43.014602  962961 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0830 21:10:43.014630  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:43.024952  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:43.027520  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:43.053809  962961 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0830 21:10:43.053838  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:43.060144  962961 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0830 21:10:43.060164  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0830 21:10:43.125806  962961 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0830 21:10:43.125831  962961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0830 21:10:43.169851  962961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0830 21:10:43.513511  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:43.514171  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:43.611248  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:44.012405  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:44.012454  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:44.109984  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:44.483883  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:44.493180  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:44.572604  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:44.703255  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:44.984333  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:45.000104  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:45.027428  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.949658998s)
	I0830 21:10:45.027491  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:45.027505  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:45.027867  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:45.027891  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:45.027911  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:45.027924  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:45.028210  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:45.028234  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:45.069296  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:45.454261  962961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.284359379s)
	I0830 21:10:45.454315  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:45.454326  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:45.454617  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:45.454681  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:45.454701  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:45.454711  962961 main.go:141] libmachine: Making call to close driver server
	I0830 21:10:45.454730  962961 main.go:141] libmachine: (addons-585092) Calling .Close
	I0830 21:10:45.454967  962961 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:10:45.454988  962961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:10:45.455029  962961 main.go:141] libmachine: (addons-585092) DBG | Closing plugin on server side
	I0830 21:10:45.456279  962961 addons.go:467] Verifying addon gcp-auth=true in "addons-585092"
	I0830 21:10:45.458114  962961 out.go:177] * Verifying gcp-auth addon...
	I0830 21:10:45.460579  962961 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0830 21:10:45.476792  962961 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0830 21:10:45.476821  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:45.494291  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:45.496073  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:45.496253  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:45.564746  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:45.979121  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:45.983894  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:46.001484  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:46.062310  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:46.481116  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:46.487727  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:46.501441  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:46.574238  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:46.704510  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:46.982172  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:46.984159  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:46.998747  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:47.060289  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:47.481976  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:47.487255  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:47.497993  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:47.560260  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:47.979598  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:47.982262  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:47.998563  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:48.061685  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:48.479392  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:48.484153  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:48.497613  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:48.559358  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:48.980359  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:48.987199  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:48.997637  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:49.060073  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:49.190054  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:49.493505  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:49.496612  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:49.511400  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:49.566976  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:49.994876  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:49.996247  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:49.999378  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:50.059895  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:50.479567  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:50.495505  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:50.499064  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:50.561184  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:50.980965  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:50.983306  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:50.998550  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:51.063048  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:51.199745  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:51.481189  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:51.484230  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:51.505029  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:51.565378  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:51.981315  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:51.989918  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:51.999576  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:52.077446  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:52.487891  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:52.490032  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:52.520850  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:52.565758  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:52.980051  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:52.983573  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:52.998298  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:53.062647  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:53.480230  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:53.483294  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:53.498808  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:53.564291  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:53.687328  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:53.984588  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:53.984664  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:54.005444  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:54.059481  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:54.479235  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:54.482406  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:54.498644  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:54.559564  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:54.980093  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:54.983068  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:54.998771  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:55.061425  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:55.715171  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:55.717848  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:55.718038  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:55.720756  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:55.721325  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:55.979224  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:55.981978  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:55.998454  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:56.059855  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:56.484513  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:56.486722  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:56.497741  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:56.560921  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:56.982183  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:56.988818  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:57.001387  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:57.059489  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:57.479098  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:57.482162  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:57.498136  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:57.559380  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:57.980045  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:57.983124  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:57.998115  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:58.063928  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:58.205154  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:10:58.479766  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:58.483494  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:58.498761  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:58.560568  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:58.981563  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:58.985462  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:58.998965  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:59.060037  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:59.483475  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:59.483517  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:59.498522  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:10:59.559912  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:10:59.980454  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:10:59.986332  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:10:59.998423  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:00.059806  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:00.479493  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:00.482200  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:00.498153  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:00.560244  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:00.684208  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:11:00.982580  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:00.991289  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:00.999122  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:01.060779  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:01.479599  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:01.489978  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:01.501842  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:01.561374  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:01.988760  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:01.990548  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:02.009914  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:02.060576  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:02.482621  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:02.490184  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:02.506329  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:02.561091  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:02.981745  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:02.986234  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:03.003305  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:03.071507  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:03.184335  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:11:03.481212  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:03.488345  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:03.500113  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:03.559919  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:03.981229  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:03.982965  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:03.998696  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:04.059795  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:04.480259  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:04.483707  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:04.499032  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:04.560017  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:04.980211  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:04.982838  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:04.999142  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:05.062378  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:05.480631  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:05.483171  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:05.498153  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:05.561354  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:05.683400  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:11:05.983122  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:05.985785  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:06.000069  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:06.068029  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:06.479843  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:06.483139  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:06.498386  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:06.559690  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:06.979879  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:06.982864  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:06.997995  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:07.060684  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:07.480578  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:07.484654  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:07.498518  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:07.560425  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:07.684550  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:11:07.979316  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:07.983433  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:07.999629  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:08.059751  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:08.488903  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:08.496433  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:08.501207  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:08.560105  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:08.981136  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:08.984738  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:08.999607  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:09.060382  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:09.486130  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:09.486186  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:09.499956  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:09.563748  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:09.874051  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:11:09.987207  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:09.994263  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:09.998330  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:10.059533  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:10.479876  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:10.483183  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:10.498449  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:10.560498  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:10.981781  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:10.984943  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:10.999380  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:11.060118  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:11.486948  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:11.488009  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:11.512383  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:11.559144  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:11.979591  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:11.983266  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:12.296917  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:12.297515  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:12.301536  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:11:12.480294  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:12.494572  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:12.501938  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:12.560415  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:12.980296  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:12.983330  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:12.998452  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:13.059440  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:13.479895  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:13.482614  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:13.498861  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:13.561389  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:13.983688  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:13.987142  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:13.998017  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:14.063992  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:14.482291  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:14.484766  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:14.499130  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:14.559595  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:14.683235  962961 pod_ready.go:102] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:11:14.984961  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:14.985844  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:14.997984  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:15.059719  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:15.479860  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:15.483694  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:15.498366  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:15.561483  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:16.141528  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:16.160368  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:16.160557  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:16.161017  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:16.240685  962961 pod_ready.go:92] pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace has status "Ready":"True"
	I0830 21:11:16.240710  962961 pod_ready.go:81] duration metric: took 38.548887689s waiting for pod "coredns-5dd5756b68-zdrqg" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.240723  962961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-585092" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.269651  962961 pod_ready.go:92] pod "etcd-addons-585092" in "kube-system" namespace has status "Ready":"True"
	I0830 21:11:16.269683  962961 pod_ready.go:81] duration metric: took 28.952521ms waiting for pod "etcd-addons-585092" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.269697  962961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-585092" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.290437  962961 pod_ready.go:92] pod "kube-apiserver-addons-585092" in "kube-system" namespace has status "Ready":"True"
	I0830 21:11:16.290464  962961 pod_ready.go:81] duration metric: took 20.759312ms waiting for pod "kube-apiserver-addons-585092" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.290480  962961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-585092" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.343093  962961 pod_ready.go:92] pod "kube-controller-manager-addons-585092" in "kube-system" namespace has status "Ready":"True"
	I0830 21:11:16.343117  962961 pod_ready.go:81] duration metric: took 52.628724ms waiting for pod "kube-controller-manager-addons-585092" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.343132  962961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xlgz2" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.385545  962961 pod_ready.go:92] pod "kube-proxy-xlgz2" in "kube-system" namespace has status "Ready":"True"
	I0830 21:11:16.385622  962961 pod_ready.go:81] duration metric: took 42.482347ms waiting for pod "kube-proxy-xlgz2" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.385645  962961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-585092" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.497211  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:16.497633  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:16.504691  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:16.560198  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:16.583237  962961 pod_ready.go:92] pod "kube-scheduler-addons-585092" in "kube-system" namespace has status "Ready":"True"
	I0830 21:11:16.583268  962961 pod_ready.go:81] duration metric: took 197.605698ms waiting for pod "kube-scheduler-addons-585092" in "kube-system" namespace to be "Ready" ...
	I0830 21:11:16.583279  962961 pod_ready.go:38] duration metric: took 42.451922359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:11:16.583301  962961 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:11:16.583369  962961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:11:16.694852  962961 api_server.go:72] duration metric: took 42.817898019s to wait for apiserver process to appear ...
	I0830 21:11:16.694879  962961 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:11:16.694899  962961 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I0830 21:11:16.704796  962961 api_server.go:279] https://192.168.39.136:8443/healthz returned 200:
	ok
	I0830 21:11:16.706166  962961 api_server.go:141] control plane version: v1.28.1
	I0830 21:11:16.706194  962961 api_server.go:131] duration metric: took 11.307914ms to wait for apiserver health ...
	I0830 21:11:16.706204  962961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:11:16.794729  962961 system_pods.go:59] 17 kube-system pods found
	I0830 21:11:16.794770  962961 system_pods.go:61] "coredns-5dd5756b68-zdrqg" [03b1068d-5f1f-408b-ba04-2725fe9ddb6d] Running
	I0830 21:11:16.794781  962961 system_pods.go:61] "csi-hostpath-attacher-0" [65650e0a-0ce2-4872-be9b-0de35e46d805] Running
	I0830 21:11:16.794788  962961 system_pods.go:61] "csi-hostpath-resizer-0" [96fc2005-b320-4cc8-80de-23416b4a92b1] Running
	I0830 21:11:16.794800  962961 system_pods.go:61] "csi-hostpathplugin-59hp8" [2f381769-da39-41be-8683-6112f526b5ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0830 21:11:16.794812  962961 system_pods.go:61] "etcd-addons-585092" [e0912011-8a15-44e6-bb8d-d5c7cc335fdb] Running
	I0830 21:11:16.794818  962961 system_pods.go:61] "kube-apiserver-addons-585092" [4058fb80-42df-4342-91ce-31f45649e6ed] Running
	I0830 21:11:16.794823  962961 system_pods.go:61] "kube-controller-manager-addons-585092" [7797481b-eef1-4d38-a753-695aa4d1e8a4] Running
	I0830 21:11:16.794827  962961 system_pods.go:61] "kube-ingress-dns-minikube" [8af39e86-88ce-447e-b865-64008666e1f3] Running
	I0830 21:11:16.794832  962961 system_pods.go:61] "kube-proxy-xlgz2" [868e2cef-bbdd-4814-bbbe-914e956a921c] Running
	I0830 21:11:16.794836  962961 system_pods.go:61] "kube-scheduler-addons-585092" [c91c3246-adc5-4c1b-aae6-5180dd90116c] Running
	I0830 21:11:16.794841  962961 system_pods.go:61] "metrics-server-7c66d45ddc-pflsn" [0c4abf13-24b6-428f-9a0d-20153eaee786] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 21:11:16.794848  962961 system_pods.go:61] "registry-kqslj" [2d5d3cd0-8bb5-4b94-b187-679fcd34e3a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0830 21:11:16.794855  962961 system_pods.go:61] "registry-proxy-5t4kg" [c0624397-887d-4175-a301-884300862c9a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0830 21:11:16.794869  962961 system_pods.go:61] "snapshot-controller-58dbcc7b99-fl8qw" [245c5ac6-43a8-40ae-8998-705e154f4c88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0830 21:11:16.794877  962961 system_pods.go:61] "snapshot-controller-58dbcc7b99-tlkgb" [8280f20e-1850-45c9-87bc-5bf134942853] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0830 21:11:16.794884  962961 system_pods.go:61] "storage-provisioner" [aa83eb80-2a41-41f4-a813-be8ea50426b0] Running
	I0830 21:11:16.794892  962961 system_pods.go:61] "tiller-deploy-7b677967b9-2qb8z" [62fd46fc-44ab-42a7-92d2-e780114685b9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0830 21:11:16.794900  962961 system_pods.go:74] duration metric: took 88.689087ms to wait for pod list to return data ...
	I0830 21:11:16.794911  962961 default_sa.go:34] waiting for default service account to be created ...
	I0830 21:11:16.979323  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:16.982884  962961 default_sa.go:45] found service account: "default"
	I0830 21:11:16.982903  962961 default_sa.go:55] duration metric: took 187.984772ms for default service account to be created ...
	I0830 21:11:16.982913  962961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 21:11:16.985332  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:17.000629  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:17.062308  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:17.187744  962961 system_pods.go:86] 17 kube-system pods found
	I0830 21:11:17.187787  962961 system_pods.go:89] "coredns-5dd5756b68-zdrqg" [03b1068d-5f1f-408b-ba04-2725fe9ddb6d] Running
	I0830 21:11:17.187794  962961 system_pods.go:89] "csi-hostpath-attacher-0" [65650e0a-0ce2-4872-be9b-0de35e46d805] Running
	I0830 21:11:17.187798  962961 system_pods.go:89] "csi-hostpath-resizer-0" [96fc2005-b320-4cc8-80de-23416b4a92b1] Running
	I0830 21:11:17.187807  962961 system_pods.go:89] "csi-hostpathplugin-59hp8" [2f381769-da39-41be-8683-6112f526b5ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0830 21:11:17.187813  962961 system_pods.go:89] "etcd-addons-585092" [e0912011-8a15-44e6-bb8d-d5c7cc335fdb] Running
	I0830 21:11:17.187818  962961 system_pods.go:89] "kube-apiserver-addons-585092" [4058fb80-42df-4342-91ce-31f45649e6ed] Running
	I0830 21:11:17.187822  962961 system_pods.go:89] "kube-controller-manager-addons-585092" [7797481b-eef1-4d38-a753-695aa4d1e8a4] Running
	I0830 21:11:17.187831  962961 system_pods.go:89] "kube-ingress-dns-minikube" [8af39e86-88ce-447e-b865-64008666e1f3] Running
	I0830 21:11:17.187835  962961 system_pods.go:89] "kube-proxy-xlgz2" [868e2cef-bbdd-4814-bbbe-914e956a921c] Running
	I0830 21:11:17.187841  962961 system_pods.go:89] "kube-scheduler-addons-585092" [c91c3246-adc5-4c1b-aae6-5180dd90116c] Running
	I0830 21:11:17.187847  962961 system_pods.go:89] "metrics-server-7c66d45ddc-pflsn" [0c4abf13-24b6-428f-9a0d-20153eaee786] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 21:11:17.187856  962961 system_pods.go:89] "registry-kqslj" [2d5d3cd0-8bb5-4b94-b187-679fcd34e3a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0830 21:11:17.187863  962961 system_pods.go:89] "registry-proxy-5t4kg" [c0624397-887d-4175-a301-884300862c9a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0830 21:11:17.187873  962961 system_pods.go:89] "snapshot-controller-58dbcc7b99-fl8qw" [245c5ac6-43a8-40ae-8998-705e154f4c88] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0830 21:11:17.187881  962961 system_pods.go:89] "snapshot-controller-58dbcc7b99-tlkgb" [8280f20e-1850-45c9-87bc-5bf134942853] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0830 21:11:17.187886  962961 system_pods.go:89] "storage-provisioner" [aa83eb80-2a41-41f4-a813-be8ea50426b0] Running
	I0830 21:11:17.187892  962961 system_pods.go:89] "tiller-deploy-7b677967b9-2qb8z" [62fd46fc-44ab-42a7-92d2-e780114685b9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0830 21:11:17.187898  962961 system_pods.go:126] duration metric: took 204.98049ms to wait for k8s-apps to be running ...
	I0830 21:11:17.187906  962961 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:11:17.187949  962961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:11:17.212374  962961 system_svc.go:56] duration metric: took 24.456316ms WaitForService to wait for kubelet.
	I0830 21:11:17.212415  962961 kubeadm.go:581] duration metric: took 43.33546536s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:11:17.212443  962961 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:11:17.381205  962961 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:11:17.381250  962961 node_conditions.go:123] node cpu capacity is 2
	I0830 21:11:17.381263  962961 node_conditions.go:105] duration metric: took 168.814294ms to run NodePressure ...
	I0830 21:11:17.381274  962961 start.go:228] waiting for startup goroutines ...
	I0830 21:11:17.484574  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:17.486516  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:17.504970  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:17.562862  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:17.993799  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:18.010324  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:18.020096  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:18.072354  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:18.501927  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:18.532345  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:18.532989  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:18.563466  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:19.015660  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:19.021327  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:19.030700  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:19.100155  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:19.485308  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:19.505259  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:19.506807  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:19.567196  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:19.982069  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:19.986368  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:19.998642  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:20.061948  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:20.799866  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:20.803654  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:20.804065  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:20.804319  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:21.009409  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:21.009816  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:21.014065  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:21.064365  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:21.479453  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:21.482456  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:21.498919  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:21.565926  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:21.996594  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:21.996674  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:22.001275  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:22.062785  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:22.480536  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:22.483957  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:22.498093  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:22.560149  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:22.979056  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:22.983354  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:23.002415  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:23.060648  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:23.479764  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:23.482800  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:23.497316  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:23.560194  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:23.988489  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:23.988529  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:24.003019  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:24.059791  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:24.480212  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:24.484094  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:24.498111  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:24.561198  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:24.979813  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:24.982991  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:24.998127  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:25.060263  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:25.483815  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:25.486258  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:25.498129  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:25.561280  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:25.979376  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:25.982255  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:25.998115  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:26.059163  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:26.486976  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:26.487125  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:26.505293  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:26.560048  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:26.987589  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:26.989427  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:27.002719  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:27.060307  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:27.479215  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:27.482646  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:27.498025  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:27.560420  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:27.979425  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:27.982072  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:27.997608  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:28.060522  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:28.481636  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:28.487025  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:28.497743  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:28.563760  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:28.979573  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:28.982428  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:28.998626  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:29.059477  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:29.480502  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:29.482716  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:29.498626  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:29.560029  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:29.979722  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:29.983742  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:29.997968  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:30.066302  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:30.481989  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:30.485493  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:30.500423  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:30.561647  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:30.987061  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:30.989402  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:30.998669  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:31.065631  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:31.481371  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:31.484320  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:31.499040  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:31.568208  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:31.986240  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:31.987073  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:31.997720  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:32.061343  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:32.479637  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:32.484043  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:32.498154  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:32.566990  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:32.979531  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:32.987061  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:32.998024  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:33.061092  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:33.508954  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:33.512146  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:33.534885  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:33.589959  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:33.982812  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:33.986421  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:33.999425  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:34.062089  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:34.485663  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:34.495640  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:34.500089  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:34.561190  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:34.983613  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:11:34.984276  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:34.998130  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:35.059157  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:35.487455  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:35.488752  962961 kapi.go:107] duration metric: took 53.544839407s to wait for kubernetes.io/minikube-addons=registry ...
	I0830 21:11:35.498821  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:35.559854  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:35.981070  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:35.998902  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:36.062107  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:36.480241  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:36.498324  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:36.559535  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:36.984406  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:37.004795  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:37.065357  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:37.499797  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:37.502199  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:37.559350  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:37.982033  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:37.999668  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:38.059467  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:38.479802  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:38.500727  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:38.559894  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:38.980237  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:39.004877  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:39.060667  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:39.497144  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:39.501018  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:39.561597  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:39.979303  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:39.998245  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:40.060264  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:40.481403  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:40.498506  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:40.560992  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:40.979789  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:41.000108  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:41.060426  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:41.481499  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:41.498516  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:41.567840  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:41.979927  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:41.998857  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:42.062080  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:42.479746  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:42.498247  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:42.563546  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:42.979604  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:42.998522  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:43.060499  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:43.480063  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:43.498193  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:43.558521  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:11:43.980126  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:43.998908  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:44.060825  962961 kapi.go:107] duration metric: took 1m1.122490218s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0830 21:11:44.480036  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:44.499155  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:44.980892  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:45.000081  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:45.479986  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:45.499027  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:45.980566  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:45.998023  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:46.480080  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:46.498486  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:46.979761  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:46.999705  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:47.483192  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:47.498540  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:47.982884  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:47.998289  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:48.484034  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:48.498905  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:48.985390  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:49.000743  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:49.480200  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:49.498858  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:49.979904  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:49.998412  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:50.481860  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:50.499116  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:50.982320  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:50.998437  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:51.479384  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:51.498127  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:51.982174  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:51.997583  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:52.481080  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:52.498028  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:53.082424  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:53.082690  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:53.480786  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:53.498908  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:53.980685  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:53.998276  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:54.480118  962961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:11:54.497782  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:54.979080  962961 kapi.go:107] duration metric: took 1m13.032338528s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0830 21:11:54.997941  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:55.499842  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:56.000728  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:56.500202  962961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:11:56.998254  962961 kapi.go:107] duration metric: took 1m11.537670247s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0830 21:11:57.000301  962961 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-585092 cluster.
	I0830 21:11:57.002143  962961 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0830 21:11:57.003709  962961 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0830 21:11:57.005187  962961 out.go:177] * Enabled addons: storage-provisioner, helm-tiller, ingress-dns, cloud-spanner, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0830 21:11:57.006549  962961 addons.go:502] enable addons completed in 1m23.29246529s: enabled=[storage-provisioner helm-tiller ingress-dns cloud-spanner inspektor-gadget metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0830 21:11:57.006585  962961 start.go:233] waiting for cluster config update ...
	I0830 21:11:57.006603  962961 start.go:242] writing updated cluster config ...
	I0830 21:11:57.006905  962961 ssh_runner.go:195] Run: rm -f paused
	I0830 21:11:57.060826  962961 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 21:11:57.062820  962961 out.go:177] * Done! kubectl is now configured to use "addons-585092" cluster and "default" namespace by default
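
	The gcp-auth messages above describe two opt-out paths: a pod that should not receive the mounted GCP credentials can carry a `gcp-auth-skip-secret` label in its spec, and pods created before the addon was enabled can be refreshed by rerunning addons enable with --refresh, as the third message notes. A minimal pod manifest applying that label might look like the sketch below; only the label key comes from the log output above, while the label value "true" and the remaining fields are assumptions for illustration.

	    # Hypothetical pod spec: the gcp-auth-skip-secret label key is taken from the
	    # log message above; the value "true" and all other fields are assumed.
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-creds
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app:1.0

	With a label like this present at creation time, the gcp-auth webhook should skip injecting the credential secret into that pod, while all other pods in the addons-585092 cluster continue to receive it.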
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 21:09:47 UTC, ends at Wed 2023-08-30 21:14:38 UTC. --
	Aug 30 21:14:37 addons-585092 crio[714]: time="2023-08-30 21:14:37.996456479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c1536d52-84e9-417b-b3bd-6c5ac82938d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.044978602Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ad80a2c9-06b7-4d61-8f0a-d47f54cbbb34 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.045191404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ad80a2c9-06b7-4d61-8f0a-d47f54cbbb34 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.045578258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ad80a2c9-06b7-4d61-8f0a-d47f54cbbb34 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.095397114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b752f64c-e192-4337-9683-9cc96bdddbe0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.095520657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b752f64c-e192-4337-9683-9cc96bdddbe0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.096557613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b752f64c-e192-4337-9683-9cc96bdddbe0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.137815261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1178a1ff-2d08-47e5-a633-a9372f865bf4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.137889891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1178a1ff-2d08-47e5-a633-a9372f865bf4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.138185147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1178a1ff-2d08-47e5-a633-a9372f865bf4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.180388969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=36aa54f5-1a76-4aed-8811-a1397890e7ff name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.180486716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=36aa54f5-1a76-4aed-8811-a1397890e7ff name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.180980595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=36aa54f5-1a76-4aed-8811-a1397890e7ff name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.219321458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f97004d3-f76b-4ded-a0aa-9ee5955f5989 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.219388953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f97004d3-f76b-4ded-a0aa-9ee5955f5989 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.219723482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f97004d3-f76b-4ded-a0aa-9ee5955f5989 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.256984293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4f82d832-acf6-48b3-96f9-a5c19b24613d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.257103991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f82d832-acf6-48b3-96f9-a5c19b24613d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.257499048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f82d832-acf6-48b3-96f9-a5c19b24613d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.286439570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=62e2312d-b624-4bbb-aedf-7b832b983b46 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.286529764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=62e2312d-b624-4bbb-aedf-7b832b983b46 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.286836711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=62e2312d-b624-4bbb-aedf-7b832b983b46 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.320586261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e7d040f6-4da0-44c4-ad77-94963d33550e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.320655190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e7d040f6-4da0-44c4-ad77-94963d33550e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:14:38 addons-585092 crio[714]: time="2023-08-30 21:14:38.321017135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a898837a17ca0b68dbc13d2b0463fa097b3afc855e360d74493fa3da0c9cf891,PodSandboxId:d7a013d6b60770b63e6cffafadce8da30142877ea24562b5fb4b0a7f869c52c7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430070347103244,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-qn4dp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1dffb803-61ff-424b-8b7a-8c6194059062,},Annotations:map[string]string{io.kubernetes.container.hash: 98dc8d9d,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced50cc91c9f5db9939434043b6230f85d870225c39f7422de4e172573ee54da,PodSandboxId:93b83ca53c8f250c9e2a77da75a6b5f313e455b689da1839ad9077f4ed5304f3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1693429942577359965,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-wwzgb,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 735bb5c3-56c4-40c9-86e6-be64f2008fb3,},An
notations:map[string]string{io.kubernetes.container.hash: eaeed656,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3178bcdfd51853737e5896d67283857346214307b480011aec7d9e98cc64bbe2,PodSandboxId:c7a89afb717da9b8a4020c748b802d74e6712af3b241e24496cd469b36202d4d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693429930839763532,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 2c0ac936-0527-4f27-a95f-78dd93c2afab,},Annotations:map[string]string{io.kubernetes.container.hash: d02211ef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89,PodSandboxId:66e90b92622022b494564b0637ce05d1bb2a221e6b4f8341a715450301ea1506,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1693429916438788145,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-nzqkj,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0a3f7f39-d9ca-411f-bd16-37bfc119f56d,},Annotations:map[string]string{io.kubernetes.container.hash: 6b005212,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b957e68ef12bbbf6ccb09f73512983ca03cf0707dcdce267c3b299c5a670a89,PodSandboxId:baec03574c332af5dd1e01c6275005cc8a1972cb1af4d8bc1fcdf0b1bc14327c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429888653962479,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zg5kz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 04e26432-7c01-4849-9d88-82d9ad13b2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 955f2085,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9716eb6b48386ecd937396aaa6429409f5d59ddd553b76e561790f3d1d6b29ab,PodSandboxId:5118a1268eadab4677cd0a01c95010c463997168ac985e235025846697d2450a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1693429882956953564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ljfqr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 807bd96a-5ae0-4cb9-9f45-bc50badda3b2,},Annotations:map[string]string{io.kubernetes.container.hash: a2a89ae2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990,PodSandboxId:719c72cbf4c0598a9f509c010fb130af6af4530cceea36b47ce9c2e2bb217108,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693429848182759020,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa83eb80-2a41-41f4-a813-be8ea50426b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51a28a26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b,PodSandboxId:1160e023f79aded366df9dac9b8bd6676b83cdd5023eceb3f24602f71321191c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06
651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693429840329703769,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zdrqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b1068d-5f1f-408b-ba04-2725fe9ddb6d,},Annotations:map[string]string{io.kubernetes.container.hash: d072e75c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91,PodSandboxId:1efc7c22f5370487339511d354ef12b663ed61f88e6ec3d22d2d06c3380bfbef,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693429835723292987,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xlgz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868e2cef-bbdd-4814-bbbe-914e956a921c,},Annotations:map[string]string{io.kubernetes.container.hash: 87df0ba6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2,PodSandboxId:556e1a0fd461eb2975583cf23f4b43ddf31ebf54125fe0b2903381e440bfa03c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&
ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693429814315336739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562fe06389d606d72dbf329765d9ccba,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7,PodSandboxId:eeed766b92aa6ea5a1f0c42962f20eac81c71a424f1838a3345ceb113c3777bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f
702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693429814414635349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9de22184a9bd892f4f303910c650889f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d55f5d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce,PodSandboxId:3a711df5cf4ab9b7e013bd3b3015f904c608ceada7a01cb378df01187f28c269,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3
c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693429814215735657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3880c51ee704ecb10a96ff7e2e7524cb,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001,PodSandboxId:5bd97d6688100918422a7549304ad6d2fd378632b0bb8c31e3649d240cbbee09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090
f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693429814037462141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-585092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da73535f8a5585c79ff765b4eab75d21,},Annotations:map[string]string{io.kubernetes.container.hash: 37e8ba2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7d040f6-4da0-44c4-ad77-94963d33550e name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID
	a898837a17ca0       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   d7a013d6b6077
	ced50cc91c9f5       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   93b83ca53c8f2
	3178bcdfd5185       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   c7a89afb717da
	3c7307096a035       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   66e90b9262202
	6b957e68ef12b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   baec03574c332
	9716eb6b48386       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   5118a1268eada
	8c60fd8cc9c4e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   719c72cbf4c05
	6b78f59caaf24       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   1160e023f79ad
	6b8fd9091e706       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                                             4 minutes ago       Running             kube-proxy                0                   1efc7c22f5370
	ce1b249629482       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   eeed766b92aa6
	96550ef4cc649       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                                             4 minutes ago       Running             kube-scheduler            0                   556e1a0fd461e
	8c6b30177c772       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                                             4 minutes ago       Running             kube-controller-manager   0                   3a711df5cf4ab
	551ac83dd8717       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                                             4 minutes ago       Running             kube-apiserver            0                   5bd97d6688100
	
	* 
	* ==> coredns [6b78f59caaf2414f6056399620054622fe897cb59de3e356e89c0506574e511b] <==
	* [INFO] 10.244.0.5:47385 - 23664 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163834s
	[INFO] 10.244.0.5:42659 - 11027 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091964s
	[INFO] 10.244.0.5:42659 - 51473 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091893s
	[INFO] 10.244.0.5:53243 - 7025 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010961s
	[INFO] 10.244.0.5:53243 - 11379 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000190319s
	[INFO] 10.244.0.5:41323 - 43836 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085518s
	[INFO] 10.244.0.5:41323 - 574 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000561079s
	[INFO] 10.244.0.5:52762 - 21195 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000115675s
	[INFO] 10.244.0.5:52762 - 59078 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000458719s
	[INFO] 10.244.0.5:33544 - 33323 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048338s
	[INFO] 10.244.0.5:33544 - 4648 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000150341s
	[INFO] 10.244.0.5:35097 - 17898 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078166s
	[INFO] 10.244.0.5:35097 - 18664 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000278785s
	[INFO] 10.244.0.5:52819 - 63003 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079992s
	[INFO] 10.244.0.5:52819 - 65049 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000301217s
	[INFO] 10.244.0.18:36721 - 28129 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276808s
	[INFO] 10.244.0.18:60015 - 33059 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000094081s
	[INFO] 10.244.0.18:58937 - 24141 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000165401s
	[INFO] 10.244.0.18:59059 - 23085 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000176005s
	[INFO] 10.244.0.18:34620 - 38791 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076916s
	[INFO] 10.244.0.18:33508 - 36496 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087714s
	[INFO] 10.244.0.18:50636 - 46285 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001300482s
	[INFO] 10.244.0.18:51585 - 55759 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001907273s
	[INFO] 10.244.0.21:46015 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018326s
	[INFO] 10.244.0.21:54951 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000451212s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-585092
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-585092
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=addons-585092
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T21_10_21_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-585092
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-585092
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:14:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:12:54 +0000   Wed, 30 Aug 2023 21:10:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:12:54 +0000   Wed, 30 Aug 2023 21:10:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:12:54 +0000   Wed, 30 Aug 2023 21:10:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:12:54 +0000   Wed, 30 Aug 2023 21:10:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    addons-585092
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e44fd702b06949328996d69153c98564
	  System UUID:                e44fd702-b069-4932-8996-d69153c98564
	  Boot ID:                    2f047f18-ee9e-4805-868d-57ad8234bb84
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-qn4dp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-d4c87556c-nzqkj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  headlamp                    headlamp-699c48fb74-wwzgb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 coredns-5dd5756b68-zdrqg                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m5s
	  kube-system                 etcd-addons-585092                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-addons-585092             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-addons-585092    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-xlgz2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-addons-585092             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m52s  kube-proxy       
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node addons-585092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node addons-585092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node addons-585092 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m17s  kubelet          Node addons-585092 status is now: NodeReady
	  Normal  RegisteredNode           4m6s   node-controller  Node addons-585092 event: Registered Node addons-585092 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.411005] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.451303] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154292] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.010839] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug30 21:10] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.105745] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.142796] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.110388] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.202975] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +10.111927] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +8.266303] systemd-fstab-generator[1242]: Ignoring "noauto" for root device
	[ +20.646492] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.033939] kauditd_printk_skb: 31 callbacks suppressed
	[Aug30 21:11] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.115641] kauditd_printk_skb: 20 callbacks suppressed
	[ +33.918295] kauditd_printk_skb: 1 callbacks suppressed
	[Aug30 21:12] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.405106] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.305126] kauditd_printk_skb: 26 callbacks suppressed
	[  +9.614087] kauditd_printk_skb: 2 callbacks suppressed
	[Aug30 21:13] kauditd_printk_skb: 12 callbacks suppressed
	[Aug30 21:14] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [ce1b249629482da730ef2c83534856f4b11b6c70447d1f9ceaf55a93103368e7] <==
	* {"level":"warn","ts":"2023-08-30T21:12:05.458046Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T21:12:04.949811Z","time spent":"508.176525ms","remote":"127.0.0.1:55250","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4401,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-7c66d45ddc-pflsn\" mod_revision:1111 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-7c66d45ddc-pflsn\" value_size:4335 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-7c66d45ddc-pflsn\" > >"}
	{"level":"info","ts":"2023-08-30T21:12:19.870926Z","caller":"traceutil/trace.go:171","msg":"trace[388888096] linearizableReadLoop","detail":"{readStateIndex:1335; appliedIndex:1334; }","duration":"237.171346ms","start":"2023-08-30T21:12:19.633732Z","end":"2023-08-30T21:12:19.870903Z","steps":["trace[388888096] 'read index received'  (duration: 237.047765ms)","trace[388888096] 'applied index is now lower than readState.Index'  (duration: 122.831µs)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T21:12:19.871194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.391366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-08-30T21:12:19.8713Z","caller":"traceutil/trace.go:171","msg":"trace[1644846778] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1289; }","duration":"237.583156ms","start":"2023-08-30T21:12:19.633707Z","end":"2023-08-30T21:12:19.87129Z","steps":["trace[1644846778] 'agreement among raft nodes before linearized reading'  (duration: 237.352284ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T21:12:22.386758Z","caller":"traceutil/trace.go:171","msg":"trace[335184994] linearizableReadLoop","detail":"{readStateIndex:1354; appliedIndex:1353; }","duration":"453.035721ms","start":"2023-08-30T21:12:21.933707Z","end":"2023-08-30T21:12:22.386743Z","steps":["trace[335184994] 'read index received'  (duration: 452.898249ms)","trace[335184994] 'applied index is now lower than readState.Index'  (duration: 136.991µs)"],"step_count":2}
	{"level":"info","ts":"2023-08-30T21:12:22.386881Z","caller":"traceutil/trace.go:171","msg":"trace[476918882] transaction","detail":"{read_only:false; response_revision:1307; number_of_response:1; }","duration":"486.913306ms","start":"2023-08-30T21:12:21.89996Z","end":"2023-08-30T21:12:22.386874Z","steps":["trace[476918882] 'process raft request'  (duration: 486.694807ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:12:22.387016Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T21:12:21.899941Z","time spent":"486.959685ms","remote":"127.0.0.1:55244","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1291 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-08-30T21:12:22.387182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.861795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2023-08-30T21:12:22.388517Z","caller":"traceutil/trace.go:171","msg":"trace[54075878] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1307; }","duration":"267.20204ms","start":"2023-08-30T21:12:22.121303Z","end":"2023-08-30T21:12:22.388505Z","steps":["trace[54075878] 'agreement among raft nodes before linearized reading'  (duration: 265.832238ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:12:22.387346Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"453.656429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T21:12:22.388665Z","caller":"traceutil/trace.go:171","msg":"trace[1036845496] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1307; }","duration":"454.976217ms","start":"2023-08-30T21:12:21.933681Z","end":"2023-08-30T21:12:22.388657Z","steps":["trace[1036845496] 'agreement among raft nodes before linearized reading'  (duration: 453.638178ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:12:22.388689Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T21:12:21.933667Z","time spent":"455.013417ms","remote":"127.0.0.1:55210","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-08-30T21:12:22.387385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.889239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-08-30T21:12:22.388839Z","caller":"traceutil/trace.go:171","msg":"trace[1808432028] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1307; }","duration":"112.337553ms","start":"2023-08-30T21:12:22.276491Z","end":"2023-08-30T21:12:22.388829Z","steps":["trace[1808432028] 'agreement among raft nodes before linearized reading'  (duration: 110.870933ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T21:12:34.430514Z","caller":"traceutil/trace.go:171","msg":"trace[372350160] transaction","detail":"{read_only:false; response_revision:1348; number_of_response:1; }","duration":"215.276282ms","start":"2023-08-30T21:12:34.214905Z","end":"2023-08-30T21:12:34.430181Z","steps":["trace[372350160] 'process raft request'  (duration: 215.093212ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T21:12:34.552725Z","caller":"traceutil/trace.go:171","msg":"trace[1121786640] linearizableReadLoop","detail":"{readStateIndex:1399; appliedIndex:1397; }","duration":"230.011836ms","start":"2023-08-30T21:12:34.322698Z","end":"2023-08-30T21:12:34.55271Z","steps":["trace[1121786640] 'read index received'  (duration: 107.39944ms)","trace[1121786640] 'applied index is now lower than readState.Index'  (duration: 122.611397ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T21:12:34.552956Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.259692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5638"}
	{"level":"warn","ts":"2023-08-30T21:12:34.552997Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.017797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshots0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T21:12:34.553055Z","caller":"traceutil/trace.go:171","msg":"trace[453887361] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshots0; response_count:0; response_revision:1349; }","duration":"167.085023ms","start":"2023-08-30T21:12:34.385963Z","end":"2023-08-30T21:12:34.553048Z","steps":["trace[453887361] 'agreement among raft nodes before linearized reading'  (duration: 167.001748ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T21:12:34.553104Z","caller":"traceutil/trace.go:171","msg":"trace[1431739413] transaction","detail":"{read_only:false; response_revision:1349; number_of_response:1; }","duration":"333.430456ms","start":"2023-08-30T21:12:34.219666Z","end":"2023-08-30T21:12:34.553097Z","steps":["trace[1431739413] 'process raft request'  (duration: 330.776214ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:12:34.5532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.772747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-08-30T21:12:34.553308Z","caller":"traceutil/trace.go:171","msg":"trace[710153531] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1349; }","duration":"104.882601ms","start":"2023-08-30T21:12:34.448419Z","end":"2023-08-30T21:12:34.553302Z","steps":["trace[710153531] 'agreement among raft nodes before linearized reading'  (duration: 104.755593ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:12:34.553342Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T21:12:34.219623Z","time spent":"333.528048ms","remote":"127.0.0.1:55270","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1336 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2023-08-30T21:12:34.553012Z","caller":"traceutil/trace.go:171","msg":"trace[1805480084] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1349; }","duration":"230.33111ms","start":"2023-08-30T21:12:34.322673Z","end":"2023-08-30T21:12:34.553004Z","steps":["trace[1805480084] 'agreement among raft nodes before linearized reading'  (duration: 230.199011ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:12:39.760081Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.604566ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3841159575249806069 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:354e8a48485ae2f4>","response":"size:39"}
	
	* 
	* ==> gcp-auth [3c7307096a035b5ce1def19ab0e57aab5c5f0967d57e593d474f91d0b8bcac89] <==
	* 2023/08/30 21:11:56 GCP Auth Webhook started!
	2023/08/30 21:12:02 Ready to marshal response ...
	2023/08/30 21:12:02 Ready to write response ...
	2023/08/30 21:12:06 Ready to marshal response ...
	2023/08/30 21:12:06 Ready to write response ...
	2023/08/30 21:12:07 Ready to marshal response ...
	2023/08/30 21:12:07 Ready to write response ...
	2023/08/30 21:12:09 Ready to marshal response ...
	2023/08/30 21:12:09 Ready to write response ...
	2023/08/30 21:12:16 Ready to marshal response ...
	2023/08/30 21:12:16 Ready to write response ...
	2023/08/30 21:12:16 Ready to marshal response ...
	2023/08/30 21:12:16 Ready to write response ...
	2023/08/30 21:12:16 Ready to marshal response ...
	2023/08/30 21:12:16 Ready to write response ...
	2023/08/30 21:12:29 Ready to marshal response ...
	2023/08/30 21:12:29 Ready to write response ...
	2023/08/30 21:12:50 Ready to marshal response ...
	2023/08/30 21:12:50 Ready to write response ...
	2023/08/30 21:14:27 Ready to marshal response ...
	2023/08/30 21:14:27 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:14:38 up 4 min,  0 users,  load average: 0.60, 1.46, 0.75
	Linux addons-585092 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [551ac83dd87177f9bf28b6f505a6a1cf7777b8b25448539eaa515b3fbfa96001] <==
	* I0830 21:13:08.705842       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:13:08.705921       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:13:08.741783       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:13:08.741890       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:13:08.741941       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:13:08.742371       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:13:08.773464       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:13:08.773557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:13:08.773801       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:13:08.773865       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0830 21:13:08.795178       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	E0830 21:13:08.805059       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0830 21:13:08.805105       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0830 21:13:08.807513       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0830 21:13:08.811463       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0830 21:13:09.706923       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0830 21:13:09.773961       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0830 21:13:09.797477       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0830 21:13:20.713420       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0830 21:13:20.713519       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 21:13:20.713585       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 21:13:20.713654       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 21:14:27.950840       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.196.149"}
	E0830 21:14:30.531810       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [8c6b30177c772d537c754f57f6321c0da441c0f27642c35d55a5fdbfff9198ce] <==
	* E0830 21:13:35.120015       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:13:42.118411       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:13:42.118466       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:13:47.475783       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:13:47.475813       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:13:49.075387       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:13:49.075552       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:14:21.374024       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:14:21.374159       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:14:22.214205       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:14:22.214378       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0830 21:14:27.691623       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0830 21:14:27.728486       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-qn4dp"
	I0830 21:14:27.746204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.527408ms"
	I0830 21:14:27.762195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.780277ms"
	I0830 21:14:27.763078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="91.188µs"
	W0830 21:14:28.272884       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:14:28.272938       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0830 21:14:30.409838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5dcd45b5bf" duration="4.6µs"
	I0830 21:14:30.411077       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0830 21:14:30.412025       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0830 21:14:31.405045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.019364ms"
	I0830 21:14:31.405872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.945µs"
	W0830 21:14:32.221345       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:14:32.221506       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [6b8fd9091e70647abe2406fa9da66af43314b8577008598083ac3187b1548d91] <==
	* I0830 21:10:44.297445       1 server_others.go:69] "Using iptables proxy"
	I0830 21:10:44.841501       1 node.go:141] Successfully retrieved node IP: 192.168.39.136
	I0830 21:10:45.887648       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 21:10:45.887696       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 21:10:45.902577       1 server_others.go:152] "Using iptables Proxier"
	I0830 21:10:45.902644       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 21:10:45.902778       1 server.go:846] "Version info" version="v1.28.1"
	I0830 21:10:45.902815       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 21:10:45.943166       1 config.go:188] "Starting service config controller"
	I0830 21:10:45.950806       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 21:10:45.950874       1 config.go:97] "Starting endpoint slice config controller"
	I0830 21:10:45.950881       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 21:10:45.963906       1 config.go:315] "Starting node config controller"
	I0830 21:10:45.963918       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 21:10:46.092561       1 shared_informer.go:318] Caches are synced for node config
	I0830 21:10:46.092695       1 shared_informer.go:318] Caches are synced for service config
	I0830 21:10:46.092714       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [96550ef4cc649eb8e271deab300b82302e26408c0fa1764fb15f13ad009907a2] <==
	* W0830 21:10:18.007591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 21:10:18.007600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0830 21:10:18.007855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 21:10:18.007950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0830 21:10:18.882610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 21:10:18.882704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0830 21:10:18.892016       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 21:10:18.892085       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 21:10:18.903993       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 21:10:18.904065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0830 21:10:18.980328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 21:10:18.980415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 21:10:18.989872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0830 21:10:18.989907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0830 21:10:19.037305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 21:10:19.037399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0830 21:10:19.042929       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 21:10:19.042991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0830 21:10:19.202443       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 21:10:19.202523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0830 21:10:19.226064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 21:10:19.226381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0830 21:10:19.227439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 21:10:19.227483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0830 21:10:21.499422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 21:09:47 UTC, ends at Wed 2023-08-30 21:14:38 UTC. --
	Aug 30 21:14:27 addons-585092 kubelet[1249]: I0830 21:14:27.735108    1249 memory_manager.go:346] "RemoveStaleState removing state" podUID="2f381769-da39-41be-8683-6112f526b5ab" containerName="node-driver-registrar"
	Aug 30 21:14:27 addons-585092 kubelet[1249]: I0830 21:14:27.795877    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkw8\" (UniqueName: \"kubernetes.io/projected/1dffb803-61ff-424b-8b7a-8c6194059062-kube-api-access-vmkw8\") pod \"hello-world-app-5d77478584-qn4dp\" (UID: \"1dffb803-61ff-424b-8b7a-8c6194059062\") " pod="default/hello-world-app-5d77478584-qn4dp"
	Aug 30 21:14:27 addons-585092 kubelet[1249]: I0830 21:14:27.795924    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1dffb803-61ff-424b-8b7a-8c6194059062-gcp-creds\") pod \"hello-world-app-5d77478584-qn4dp\" (UID: \"1dffb803-61ff-424b-8b7a-8c6194059062\") " pod="default/hello-world-app-5d77478584-qn4dp"
	Aug 30 21:14:29 addons-585092 kubelet[1249]: I0830 21:14:29.106167    1249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2f5t\" (UniqueName: \"kubernetes.io/projected/8af39e86-88ce-447e-b865-64008666e1f3-kube-api-access-f2f5t\") pod \"8af39e86-88ce-447e-b865-64008666e1f3\" (UID: \"8af39e86-88ce-447e-b865-64008666e1f3\") "
	Aug 30 21:14:29 addons-585092 kubelet[1249]: I0830 21:14:29.111047    1249 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af39e86-88ce-447e-b865-64008666e1f3-kube-api-access-f2f5t" (OuterVolumeSpecName: "kube-api-access-f2f5t") pod "8af39e86-88ce-447e-b865-64008666e1f3" (UID: "8af39e86-88ce-447e-b865-64008666e1f3"). InnerVolumeSpecName "kube-api-access-f2f5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 30 21:14:29 addons-585092 kubelet[1249]: I0830 21:14:29.206587    1249 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f2f5t\" (UniqueName: \"kubernetes.io/projected/8af39e86-88ce-447e-b865-64008666e1f3-kube-api-access-f2f5t\") on node \"addons-585092\" DevicePath \"\""
	Aug 30 21:14:29 addons-585092 kubelet[1249]: I0830 21:14:29.368688    1249 scope.go:117] "RemoveContainer" containerID="0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200"
	Aug 30 21:14:29 addons-585092 kubelet[1249]: I0830 21:14:29.544337    1249 scope.go:117] "RemoveContainer" containerID="0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200"
	Aug 30 21:14:29 addons-585092 kubelet[1249]: E0830 21:14:29.567340    1249 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200\": container with ID starting with 0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200 not found: ID does not exist" containerID="0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200"
	Aug 30 21:14:29 addons-585092 kubelet[1249]: I0830 21:14:29.567417    1249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200"} err="failed to get container status \"0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200\": rpc error: code = NotFound desc = could not find container \"0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200\": container with ID starting with 0a46dcb29a3b975915d0c50affc9f056487eb5a9e0d2d0296b03fe40d0c1a200 not found: ID does not exist"
	Aug 30 21:14:31 addons-585092 kubelet[1249]: I0830 21:14:31.100404    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="04e26432-7c01-4849-9d88-82d9ad13b2ff" path="/var/lib/kubelet/pods/04e26432-7c01-4849-9d88-82d9ad13b2ff/volumes"
	Aug 30 21:14:31 addons-585092 kubelet[1249]: I0830 21:14:31.100923    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="807bd96a-5ae0-4cb9-9f45-bc50badda3b2" path="/var/lib/kubelet/pods/807bd96a-5ae0-4cb9-9f45-bc50badda3b2/volumes"
	Aug 30 21:14:31 addons-585092 kubelet[1249]: I0830 21:14:31.101431    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8af39e86-88ce-447e-b865-64008666e1f3" path="/var/lib/kubelet/pods/8af39e86-88ce-447e-b865-64008666e1f3/volumes"
	Aug 30 21:14:31 addons-585092 kubelet[1249]: I0830 21:14:31.396279    1249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-qn4dp" podStartSLOduration=2.8921570450000003 podCreationTimestamp="2023-08-30 21:14:27 +0000 UTC" firstStartedPulling="2023-08-30 21:14:28.817159994 +0000 UTC m=+247.900089530" lastFinishedPulling="2023-08-30 21:14:30.321177044 +0000 UTC m=+249.404106577" observedRunningTime="2023-08-30 21:14:31.395169519 +0000 UTC m=+250.478099065" watchObservedRunningTime="2023-08-30 21:14:31.396174092 +0000 UTC m=+250.479103645"
	Aug 30 21:14:33 addons-585092 kubelet[1249]: I0830 21:14:33.843372    1249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa-webhook-cert\") pod \"3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa\" (UID: \"3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa\") "
	Aug 30 21:14:33 addons-585092 kubelet[1249]: I0830 21:14:33.843460    1249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7rnq\" (UniqueName: \"kubernetes.io/projected/3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa-kube-api-access-t7rnq\") pod \"3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa\" (UID: \"3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa\") "
	Aug 30 21:14:33 addons-585092 kubelet[1249]: I0830 21:14:33.847813    1249 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa-kube-api-access-t7rnq" (OuterVolumeSpecName: "kube-api-access-t7rnq") pod "3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa" (UID: "3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa"). InnerVolumeSpecName "kube-api-access-t7rnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 30 21:14:33 addons-585092 kubelet[1249]: I0830 21:14:33.853192    1249 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa" (UID: "3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:14:33 addons-585092 kubelet[1249]: I0830 21:14:33.944488    1249 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t7rnq\" (UniqueName: \"kubernetes.io/projected/3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa-kube-api-access-t7rnq\") on node \"addons-585092\" DevicePath \"\""
	Aug 30 21:14:33 addons-585092 kubelet[1249]: I0830 21:14:33.944518    1249 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa-webhook-cert\") on node \"addons-585092\" DevicePath \"\""
	Aug 30 21:14:34 addons-585092 kubelet[1249]: I0830 21:14:34.394423    1249 scope.go:117] "RemoveContainer" containerID="0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463"
	Aug 30 21:14:34 addons-585092 kubelet[1249]: I0830 21:14:34.429480    1249 scope.go:117] "RemoveContainer" containerID="0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463"
	Aug 30 21:14:34 addons-585092 kubelet[1249]: E0830 21:14:34.430130    1249 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463\": container with ID starting with 0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463 not found: ID does not exist" containerID="0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463"
	Aug 30 21:14:34 addons-585092 kubelet[1249]: I0830 21:14:34.430167    1249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463"} err="failed to get container status \"0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463\": rpc error: code = NotFound desc = could not find container \"0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463\": container with ID starting with 0462b7962719fab3fca08081cf43b903f6832cb2afce4a31ed7ae9913b358463 not found: ID does not exist"
	Aug 30 21:14:35 addons-585092 kubelet[1249]: I0830 21:14:35.099932    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa" path="/var/lib/kubelet/pods/3ddf5b0f-5c47-48a8-8a40-701a4c34c3aa/volumes"
	
	* 
	* ==> storage-provisioner [8c60fd8cc9c4e43bc233f7ba9ea9cc61c07d0b88b6caeef1a12a7b4da50ab990] <==
	* I0830 21:10:48.767880       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 21:10:48.852551       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 21:10:48.854421       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 21:10:49.107443       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 21:10:49.122604       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-585092_f9a15314-0d3c-4634-b8b0-adedbb8abb79!
	I0830 21:10:49.121908       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f42d741b-e555-4dab-9abe-10781ccf523a", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-585092_f9a15314-0d3c-4634-b8b0-adedbb8abb79 became leader
	I0830 21:10:49.223508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-585092_f9a15314-0d3c-4634-b8b0-adedbb8abb79!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-585092 -n addons-585092
helpers_test.go:261: (dbg) Run:  kubectl --context addons-585092 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (162.30s)

                                                
                                    
TestAddons/StoppedEnableDisable (155.33s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-585092
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-585092: exit status 82 (2m1.480784673s)

                                                
                                                
-- stdout --
	* Stopping node "addons-585092"  ...
	* Stopping node "addons-585092"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-585092" : exit status 82
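For reference, the GUEST_STOP_TIMEOUT box above suggests collecting logs before filing an issue. An illustrative, manually-run equivalent against the same profile (not executed as part of this run) would be:

	out/minikube-linux-amd64 -p addons-585092 logs --file=logs.txt
	out/minikube-linux-amd64 status -p addons-585092

The first command writes the minikube log bundle to logs.txt; the second reports whether the VM is still in the "Running" state that caused the stop to time out.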
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-585092
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-585092: exit status 11 (21.558273288s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-585092" : exit status 11
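The MK_ADDON_ENABLE_PAUSED error shows what the addon command does first: it checks whether the cluster is paused by listing containers with crictl over SSH, and that SSH dial to 192.168.39.136:22 fails with "no route to host". As an illustrative sketch (assuming the same profile; while the VM is unreachable these would fail the same way), the equivalent manual checks are:

	out/minikube-linux-amd64 -p addons-585092 ssh "echo ok"
	out/minikube-linux-amd64 -p addons-585092 ssh "sudo crictl ps -a"

The first verifies plain SSH reachability; the second is roughly the container listing that the paused check relies on.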
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-585092
addons_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-585092: exit status 11 (6.142734393s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-585092" : exit status 11
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-585092
addons_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-585092: exit status 11 (6.144424877s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-585092" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.33s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (163.34s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-306023 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-306023 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.827343111s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-306023 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-306023 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f3dacb2e-a4c3-414a-9e73-713b175fc41f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f3dacb2e-a4c3-414a-9e73-713b175fc41f] Running
E0830 21:24:40.921230  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.013967722s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0830 21:26:49.715812  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:49.721100  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:49.731261  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:49.751986  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:49.792309  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:49.872650  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:50.033130  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:50.353757  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:50.994792  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:26:52.275352  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-306023 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.515518012s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
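The remote curl exited with status 28 (reported above as "ssh: Process exited with status 28"), which matches curl's operation-timed-out exit code, i.e. the ingress controller never answered on 127.0.0.1:80 within the probe's window. An illustrative manual variant of the same probe, with an explicit timeout and verbose output (assuming the same profile; not part of the recorded run):

	out/minikube-linux-amd64 -p ingress-addon-legacy-306023 ssh "curl -v -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"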
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-306023 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.247
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 addons disable ingress-dns --alsologtostderr -v=1
E0830 21:26:54.836098  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-306023 addons disable ingress-dns --alsologtostderr -v=1: (2.581338688s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 addons disable ingress --alsologtostderr -v=1
E0830 21:26:57.076177  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:26:59.956821  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-306023 addons disable ingress --alsologtostderr -v=1: (7.556356579s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-306023 -n ingress-addon-legacy-306023
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-306023 logs -n 25: (1.005912538s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-944257                                                         | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-944257                                                         | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-944257 image load --daemon                                     | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-944257                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-944257 image ls                                                | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	| image          | functional-944257 image save                                              | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-944257                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-944257 image rm                                                | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-944257                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-944257 image ls                                                | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	| image          | functional-944257 image load                                              | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-944257 image ls                                                | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	| image          | functional-944257 image save --daemon                                     | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-944257                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-944257                                                         | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-944257                                                         | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-944257                                                         | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-944257 ssh pgrep                                               | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-944257 image build -t                                          | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | localhost/my-image:functional-944257                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-944257                                                         | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-944257 image ls                                                | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	| delete         | -p functional-944257                                                      | functional-944257           | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:22 UTC |
	| start          | -p ingress-addon-legacy-306023                                            | ingress-addon-legacy-306023 | jenkins | v1.31.2 | 30 Aug 23 21:22 UTC | 30 Aug 23 21:24 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-306023                                               | ingress-addon-legacy-306023 | jenkins | v1.31.2 | 30 Aug 23 21:24 UTC | 30 Aug 23 21:24 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-306023                                               | ingress-addon-legacy-306023 | jenkins | v1.31.2 | 30 Aug 23 21:24 UTC | 30 Aug 23 21:24 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-306023                                               | ingress-addon-legacy-306023 | jenkins | v1.31.2 | 30 Aug 23 21:24 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-306023 ip                                            | ingress-addon-legacy-306023 | jenkins | v1.31.2 | 30 Aug 23 21:26 UTC | 30 Aug 23 21:26 UTC |
	| addons         | ingress-addon-legacy-306023                                               | ingress-addon-legacy-306023 | jenkins | v1.31.2 | 30 Aug 23 21:26 UTC | 30 Aug 23 21:26 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-306023                                               | ingress-addon-legacy-306023 | jenkins | v1.31.2 | 30 Aug 23 21:26 UTC | 30 Aug 23 21:27 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:22:45
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:22:45.079810  971113 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:22:45.079997  971113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:22:45.080010  971113 out.go:309] Setting ErrFile to fd 2...
	I0830 21:22:45.080017  971113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:22:45.080356  971113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 21:22:45.081102  971113 out.go:303] Setting JSON to false
	I0830 21:22:45.082165  971113 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11112,"bootTime":1693419453,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:22:45.082233  971113 start.go:138] virtualization: kvm guest
	I0830 21:22:45.084726  971113 out.go:177] * [ingress-addon-legacy-306023] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 21:22:45.086331  971113 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 21:22:45.087711  971113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:22:45.086378  971113 notify.go:220] Checking for updates...
	I0830 21:22:45.090401  971113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:22:45.091865  971113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:22:45.093288  971113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 21:22:45.094576  971113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:22:45.096041  971113 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:22:45.131404  971113 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 21:22:45.132776  971113 start.go:298] selected driver: kvm2
	I0830 21:22:45.132787  971113 start.go:902] validating driver "kvm2" against <nil>
	I0830 21:22:45.132806  971113 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:22:45.133530  971113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:22:45.133612  971113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 21:22:45.147753  971113 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 21:22:45.147815  971113 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 21:22:45.148028  971113 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 21:22:45.148064  971113 cni.go:84] Creating CNI manager for ""
	I0830 21:22:45.148070  971113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:22:45.148082  971113 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0830 21:22:45.148095  971113 start_flags.go:319] config:
	{Name:ingress-addon-legacy-306023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-306023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:22:45.148270  971113 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:22:45.150036  971113 out.go:177] * Starting control plane node ingress-addon-legacy-306023 in cluster ingress-addon-legacy-306023
	I0830 21:22:45.151434  971113 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0830 21:22:45.176426  971113 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0830 21:22:45.176447  971113 cache.go:57] Caching tarball of preloaded images
	I0830 21:22:45.176603  971113 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0830 21:22:45.178235  971113 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0830 21:22:45.179712  971113 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:22:45.213571  971113 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0830 21:22:49.866365  971113 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:22:49.866454  971113 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:22:50.818061  971113 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
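	(For reference, the same preload tarball can be fetched and checksum-verified by hand using the URL and md5 recorded in the log above; the local filename below is illustrative, not taken from this run.)
	  curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 \
	    https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	  echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -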
	I0830 21:22:50.818411  971113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/config.json ...
	I0830 21:22:50.818441  971113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/config.json: {Name:mk40494e5b4a6cfef41aab020bd9bc2060011e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:22:50.818668  971113 start.go:365] acquiring machines lock for ingress-addon-legacy-306023: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:22:50.818711  971113 start.go:369] acquired machines lock for "ingress-addon-legacy-306023" in 22.142µs
	I0830 21:22:50.818737  971113 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-306023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-306023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:22:50.818850  971113 start.go:125] createHost starting for "" (driver="kvm2")
	I0830 21:22:50.821065  971113 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0830 21:22:50.821228  971113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:22:50.821283  971113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:22:50.835663  971113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I0830 21:22:50.836152  971113 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:22:50.836759  971113 main.go:141] libmachine: Using API Version  1
	I0830 21:22:50.836784  971113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:22:50.837144  971113 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:22:50.837335  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetMachineName
	I0830 21:22:50.837473  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:22:50.837641  971113 start.go:159] libmachine.API.Create for "ingress-addon-legacy-306023" (driver="kvm2")
	I0830 21:22:50.837673  971113 client.go:168] LocalClient.Create starting
	I0830 21:22:50.837700  971113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem
	I0830 21:22:50.837732  971113 main.go:141] libmachine: Decoding PEM data...
	I0830 21:22:50.837748  971113 main.go:141] libmachine: Parsing certificate...
	I0830 21:22:50.837807  971113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem
	I0830 21:22:50.837824  971113 main.go:141] libmachine: Decoding PEM data...
	I0830 21:22:50.837836  971113 main.go:141] libmachine: Parsing certificate...
	I0830 21:22:50.837853  971113 main.go:141] libmachine: Running pre-create checks...
	I0830 21:22:50.837863  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .PreCreateCheck
	I0830 21:22:50.838199  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetConfigRaw
	I0830 21:22:50.838592  971113 main.go:141] libmachine: Creating machine...
	I0830 21:22:50.838607  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .Create
	I0830 21:22:50.838722  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Creating KVM machine...
	I0830 21:22:50.839812  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found existing default KVM network
	I0830 21:22:50.840509  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:50.840374  971153 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298a0}
	I0830 21:22:50.845647  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | trying to create private KVM network mk-ingress-addon-legacy-306023 192.168.39.0/24...
	I0830 21:22:50.913620  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Setting up store path in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023 ...
	I0830 21:22:50.913659  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | private KVM network mk-ingress-addon-legacy-306023 192.168.39.0/24 created
	I0830 21:22:50.913673  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Building disk image from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 21:22:50.913688  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:50.913530  971153 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:22:50.913714  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Downloading /home/jenkins/minikube-integration/17114-955377/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 21:22:51.142635  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:51.142515  971153 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa...
	I0830 21:22:51.535863  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:51.535700  971153 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/ingress-addon-legacy-306023.rawdisk...
	I0830 21:22:51.535894  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Writing magic tar header
	I0830 21:22:51.535916  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Writing SSH key tar header
	I0830 21:22:51.536039  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:51.535946  971153 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023 ...
	I0830 21:22:51.536121  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023
	I0830 21:22:51.536148  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines
	I0830 21:22:51.536169  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023 (perms=drwx------)
	I0830 21:22:51.536193  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines (perms=drwxr-xr-x)
	I0830 21:22:51.536211  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube (perms=drwxr-xr-x)
	I0830 21:22:51.536230  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377 (perms=drwxrwxr-x)
	I0830 21:22:51.536250  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 21:22:51.536267  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:22:51.536286  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377
	I0830 21:22:51.536301  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 21:22:51.536318  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Checking permissions on dir: /home/jenkins
	I0830 21:22:51.536332  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Checking permissions on dir: /home
	I0830 21:22:51.536347  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 21:22:51.536364  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Creating domain...
	I0830 21:22:51.536383  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Skipping /home - not owner
	I0830 21:22:51.537233  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) define libvirt domain using xml: 
	I0830 21:22:51.537261  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) <domain type='kvm'>
	I0830 21:22:51.537275  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   <name>ingress-addon-legacy-306023</name>
	I0830 21:22:51.537296  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   <memory unit='MiB'>4096</memory>
	I0830 21:22:51.537314  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   <vcpu>2</vcpu>
	I0830 21:22:51.537326  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   <features>
	I0830 21:22:51.537339  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <acpi/>
	I0830 21:22:51.537351  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <apic/>
	I0830 21:22:51.537363  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <pae/>
	I0830 21:22:51.537373  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     
	I0830 21:22:51.537407  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   </features>
	I0830 21:22:51.537439  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   <cpu mode='host-passthrough'>
	I0830 21:22:51.537455  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   
	I0830 21:22:51.537475  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   </cpu>
	I0830 21:22:51.537492  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   <os>
	I0830 21:22:51.537511  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <type>hvm</type>
	I0830 21:22:51.537543  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <boot dev='cdrom'/>
	I0830 21:22:51.537562  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <boot dev='hd'/>
	I0830 21:22:51.537579  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <bootmenu enable='no'/>
	I0830 21:22:51.537593  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   </os>
	I0830 21:22:51.537609  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   <devices>
	I0830 21:22:51.537623  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <disk type='file' device='cdrom'>
	I0830 21:22:51.537650  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/boot2docker.iso'/>
	I0830 21:22:51.537674  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <target dev='hdc' bus='scsi'/>
	I0830 21:22:51.537684  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <readonly/>
	I0830 21:22:51.537690  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     </disk>
	I0830 21:22:51.537698  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <disk type='file' device='disk'>
	I0830 21:22:51.537708  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 21:22:51.537718  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/ingress-addon-legacy-306023.rawdisk'/>
	I0830 21:22:51.537726  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <target dev='hda' bus='virtio'/>
	I0830 21:22:51.537732  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     </disk>
	I0830 21:22:51.537741  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <interface type='network'>
	I0830 21:22:51.537748  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <source network='mk-ingress-addon-legacy-306023'/>
	I0830 21:22:51.537756  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <model type='virtio'/>
	I0830 21:22:51.537763  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     </interface>
	I0830 21:22:51.537774  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <interface type='network'>
	I0830 21:22:51.537784  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <source network='default'/>
	I0830 21:22:51.537789  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <model type='virtio'/>
	I0830 21:22:51.537796  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     </interface>
	I0830 21:22:51.537808  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <serial type='pty'>
	I0830 21:22:51.537817  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <target port='0'/>
	I0830 21:22:51.537822  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     </serial>
	I0830 21:22:51.537838  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <console type='pty'>
	I0830 21:22:51.537847  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <target type='serial' port='0'/>
	I0830 21:22:51.537853  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     </console>
	I0830 21:22:51.537864  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     <rng model='virtio'>
	I0830 21:22:51.537873  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)       <backend model='random'>/dev/random</backend>
	I0830 21:22:51.537878  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     </rng>
	I0830 21:22:51.537886  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     
	I0830 21:22:51.537891  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)     
	I0830 21:22:51.537900  971113 main.go:141] libmachine: (ingress-addon-legacy-306023)   </devices>
	I0830 21:22:51.537905  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) </domain>
	I0830 21:22:51.537916  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) 
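	(The domain definition logged above can be cross-checked against what libvirt actually registered; a sketch, assuming virsh is available on the host and the qemu:///system URI from the config is used.)
	  virsh -c qemu:///system dumpxml ingress-addon-legacy-306023
	  virsh -c qemu:///system net-dumpxml mk-ingress-addon-legacy-306023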
	I0830 21:22:51.542095  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:c0:d8:90 in network default
	I0830 21:22:51.542573  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Ensuring networks are active...
	I0830 21:22:51.542594  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:51.543172  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Ensuring network default is active
	I0830 21:22:51.543480  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Ensuring network mk-ingress-addon-legacy-306023 is active
	I0830 21:22:51.544043  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Getting domain xml...
	I0830 21:22:51.544752  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Creating domain...
	I0830 21:22:52.760884  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Waiting to get IP...
	I0830 21:22:52.761671  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:52.762025  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:52.762119  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:52.762044  971153 retry.go:31] will retry after 219.123665ms: waiting for machine to come up
	I0830 21:22:52.982421  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:52.982841  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:52.982870  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:52.982781  971153 retry.go:31] will retry after 311.941063ms: waiting for machine to come up
	I0830 21:22:53.296539  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:53.296992  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:53.297032  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:53.296929  971153 retry.go:31] will retry after 329.871593ms: waiting for machine to come up
	I0830 21:22:53.628475  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:53.628869  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:53.628903  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:53.628819  971153 retry.go:31] will retry after 376.306039ms: waiting for machine to come up
	I0830 21:22:54.006319  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:54.006760  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:54.006791  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:54.006715  971153 retry.go:31] will retry after 643.706441ms: waiting for machine to come up
	I0830 21:22:54.651430  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:54.651890  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:54.651915  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:54.651845  971153 retry.go:31] will retry after 699.250667ms: waiting for machine to come up
	I0830 21:22:55.352854  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:55.353225  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:55.353259  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:55.353208  971153 retry.go:31] will retry after 807.06267ms: waiting for machine to come up
	I0830 21:22:56.161775  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:56.162179  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:56.162204  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:56.162148  971153 retry.go:31] will retry after 1.088037435s: waiting for machine to come up
	I0830 21:22:57.251585  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:57.252006  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:57.252035  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:57.251950  971153 retry.go:31] will retry after 1.715025159s: waiting for machine to come up
	I0830 21:22:58.968791  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:22:58.969172  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:22:58.969204  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:22:58.969135  971153 retry.go:31] will retry after 2.306619419s: waiting for machine to come up
	I0830 21:23:01.277187  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:01.277724  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:23:01.277755  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:23:01.277658  971153 retry.go:31] will retry after 2.574363683s: waiting for machine to come up
	I0830 21:23:03.853184  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:03.853562  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:23:03.853595  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:23:03.853504  971153 retry.go:31] will retry after 3.25229209s: waiting for machine to come up
	I0830 21:23:07.106975  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:07.107348  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:23:07.107375  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:23:07.107308  971153 retry.go:31] will retry after 4.440950702s: waiting for machine to come up
	I0830 21:23:11.549297  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:11.549733  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find current IP address of domain ingress-addon-legacy-306023 in network mk-ingress-addon-legacy-306023
	I0830 21:23:11.549778  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | I0830 21:23:11.549701  971153 retry.go:31] will retry after 3.923599s: waiting for machine to come up
	I0830 21:23:15.477733  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.478149  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Found IP for machine: 192.168.39.247
	I0830 21:23:15.478182  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has current primary IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.478195  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Reserving static IP address...
	I0830 21:23:15.478544  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-306023", mac: "52:54:00:93:ae:5e", ip: "192.168.39.247"} in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.550362  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Getting to WaitForSSH function...
	I0830 21:23:15.550396  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Reserved static IP address: 192.168.39.247
	I0830 21:23:15.550410  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Waiting for SSH to be available...
	I0830 21:23:15.552781  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.553127  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:minikube Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:15.553167  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.553271  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Using SSH client type: external
	I0830 21:23:15.553302  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa (-rw-------)
	I0830 21:23:15.553349  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.247 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 21:23:15.553374  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | About to run SSH command:
	I0830 21:23:15.553390  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | exit 0
	I0830 21:23:15.639509  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | SSH cmd err, output: <nil>: 
	I0830 21:23:15.639785  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) KVM machine creation complete!
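	(The MAC/IP pairing reported above can be confirmed against the DHCP leases of the private network; a sketch, assuming host access to the same libvirt instance.)
	  virsh -c qemu:///system net-dhcp-leases mk-ingress-addon-legacy-306023
	  virsh -c qemu:///system domifaddr ingress-addon-legacy-306023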
	I0830 21:23:15.640171  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetConfigRaw
	I0830 21:23:15.640699  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:15.640876  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:15.640999  971113 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 21:23:15.641012  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetState
	I0830 21:23:15.642382  971113 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 21:23:15.642400  971113 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 21:23:15.642411  971113 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 21:23:15.642421  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:15.644296  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.644609  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:15.644639  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.644785  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:15.644980  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:15.645161  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:15.645321  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:15.645481  971113 main.go:141] libmachine: Using SSH client type: native
	I0830 21:23:15.646124  971113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0830 21:23:15.646147  971113 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 21:23:15.754543  971113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:23:15.754563  971113 main.go:141] libmachine: Detecting the provisioner...
	I0830 21:23:15.754571  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:15.757102  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.757456  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:15.757496  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.757604  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:15.757797  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:15.757971  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:15.758087  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:15.758280  971113 main.go:141] libmachine: Using SSH client type: native
	I0830 21:23:15.758716  971113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0830 21:23:15.758731  971113 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 21:23:15.868853  971113 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 21:23:15.868964  971113 main.go:141] libmachine: found compatible host: buildroot
	I0830 21:23:15.868975  971113 main.go:141] libmachine: Provisioning with buildroot...
	I0830 21:23:15.868983  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetMachineName
	I0830 21:23:15.869276  971113 buildroot.go:166] provisioning hostname "ingress-addon-legacy-306023"
	I0830 21:23:15.869311  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetMachineName
	I0830 21:23:15.869477  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:15.872063  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.872445  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:15.872488  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:15.872559  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:15.872739  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:15.872934  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:15.873075  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:15.873234  971113 main.go:141] libmachine: Using SSH client type: native
	I0830 21:23:15.873632  971113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0830 21:23:15.873646  971113 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-306023 && echo "ingress-addon-legacy-306023" | sudo tee /etc/hostname
	I0830 21:23:15.997732  971113 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-306023
	
	I0830 21:23:15.997765  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:16.000439  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.000712  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.000742  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.000912  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:16.001079  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.001213  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.001322  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:16.001459  971113 main.go:141] libmachine: Using SSH client type: native
	I0830 21:23:16.002057  971113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0830 21:23:16.002080  971113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-306023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-306023/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-306023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:23:16.120559  971113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:23:16.120595  971113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 21:23:16.120629  971113 buildroot.go:174] setting up certificates
	I0830 21:23:16.120640  971113 provision.go:83] configureAuth start
	I0830 21:23:16.120649  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetMachineName
	I0830 21:23:16.120941  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetIP
	I0830 21:23:16.123543  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.123895  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.123928  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.124030  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:16.125952  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.126264  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.126305  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.126356  971113 provision.go:138] copyHostCerts
	I0830 21:23:16.126408  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:23:16.126457  971113 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 21:23:16.126474  971113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:23:16.126537  971113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 21:23:16.126649  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:23:16.126669  971113 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 21:23:16.126673  971113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:23:16.126698  971113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 21:23:16.126741  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:23:16.126756  971113 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 21:23:16.126763  971113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:23:16.126781  971113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 21:23:16.126823  971113 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-306023 san=[192.168.39.247 192.168.39.247 localhost 127.0.0.1 minikube ingress-addon-legacy-306023]
	I0830 21:23:16.233989  971113 provision.go:172] copyRemoteCerts
	I0830 21:23:16.234045  971113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:23:16.234079  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:16.236820  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.237143  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.237181  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.237355  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:16.237568  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.237742  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:16.237879  971113 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa Username:docker}
	I0830 21:23:16.320519  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 21:23:16.320600  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 21:23:16.343408  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 21:23:16.343467  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0830 21:23:16.365597  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 21:23:16.365706  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
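	(The SANs listed in the cert-generation step above can be inspected on the host copy of the generated certificate; a sketch using the server.pem path from this log, assuming openssl is installed on the host.)
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'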
	I0830 21:23:16.388149  971113 provision.go:86] duration metric: configureAuth took 267.495054ms
	I0830 21:23:16.388174  971113 buildroot.go:189] setting minikube options for container-runtime
	I0830 21:23:16.388358  971113 config.go:182] Loaded profile config "ingress-addon-legacy-306023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0830 21:23:16.388437  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:16.390930  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.391264  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.391295  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.391501  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:16.391716  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.391896  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.392085  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:16.392260  971113 main.go:141] libmachine: Using SSH client type: native
	I0830 21:23:16.392667  971113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0830 21:23:16.392683  971113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:23:16.698717  971113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
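	(To confirm the sysconfig drop-in written above took effect on the guest, the file and the crio unit can be checked over minikube ssh; illustrative only, using the same binary and profile name as this run.)
	  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 ssh "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"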
	I0830 21:23:16.698764  971113 main.go:141] libmachine: Checking connection to Docker...
	I0830 21:23:16.698779  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetURL
	I0830 21:23:16.700133  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Using libvirt version 6000000
	I0830 21:23:16.702276  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.702605  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.702642  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.702780  971113 main.go:141] libmachine: Docker is up and running!
	I0830 21:23:16.702796  971113 main.go:141] libmachine: Reticulating splines...
	I0830 21:23:16.702804  971113 client.go:171] LocalClient.Create took 25.865123476s
	I0830 21:23:16.702852  971113 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-306023" took 25.86521155s
	I0830 21:23:16.702864  971113 start.go:300] post-start starting for "ingress-addon-legacy-306023" (driver="kvm2")
	I0830 21:23:16.702877  971113 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:23:16.702905  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:16.703134  971113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:23:16.703161  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:16.705154  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.705444  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.705474  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.705568  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:16.705750  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.705894  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:16.706014  971113 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa Username:docker}
	I0830 21:23:16.789007  971113 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:23:16.793192  971113 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 21:23:16.793216  971113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 21:23:16.793287  971113 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 21:23:16.793366  971113 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 21:23:16.793378  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /etc/ssl/certs/9626212.pem
	I0830 21:23:16.793461  971113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 21:23:16.801449  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:23:16.823407  971113 start.go:303] post-start completed in 120.527437ms
	I0830 21:23:16.823466  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetConfigRaw
	I0830 21:23:16.824113  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetIP
	I0830 21:23:16.826465  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.826801  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.826850  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.827028  971113 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/config.json ...
	I0830 21:23:16.827235  971113 start.go:128] duration metric: createHost completed in 26.008375046s
	I0830 21:23:16.827261  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:16.829363  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.829680  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.829702  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.829832  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:16.829992  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.830163  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.830298  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:16.830442  971113 main.go:141] libmachine: Using SSH client type: native
	I0830 21:23:16.830833  971113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.247 22 <nil> <nil>}
	I0830 21:23:16.830843  971113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 21:23:16.940334  971113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693430596.925173608
	
	I0830 21:23:16.940356  971113 fix.go:206] guest clock: 1693430596.925173608
	I0830 21:23:16.940365  971113 fix.go:219] Guest: 2023-08-30 21:23:16.925173608 +0000 UTC Remote: 2023-08-30 21:23:16.827250403 +0000 UTC m=+31.798045716 (delta=97.923205ms)
	I0830 21:23:16.940387  971113 fix.go:190] guest clock delta is within tolerance: 97.923205ms
	I0830 21:23:16.940394  971113 start.go:83] releasing machines lock for "ingress-addon-legacy-306023", held for 26.121670515s
	I0830 21:23:16.940418  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:16.940704  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetIP
	I0830 21:23:16.943028  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.943317  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.943355  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.943441  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:16.944060  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:16.944255  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:16.944385  971113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:23:16.944444  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:16.944448  971113 ssh_runner.go:195] Run: cat /version.json
	I0830 21:23:16.944465  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:16.946776  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.947017  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.947057  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.947083  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.947184  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:16.947363  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.947382  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:16.947431  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:16.947498  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:16.947581  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:16.947632  971113 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa Username:docker}
	I0830 21:23:16.947742  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:16.947901  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:16.948033  971113 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa Username:docker}
	I0830 21:23:17.060777  971113 ssh_runner.go:195] Run: systemctl --version
	I0830 21:23:17.066463  971113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:23:17.222102  971113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 21:23:17.228815  971113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 21:23:17.228898  971113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:23:17.243488  971113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 21:23:17.243521  971113 start.go:466] detecting cgroup driver to use...
	I0830 21:23:17.243605  971113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:23:17.257042  971113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:23:17.269280  971113 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:23:17.269338  971113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:23:17.281596  971113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:23:17.293611  971113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:23:17.392606  971113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:23:17.505312  971113 docker.go:212] disabling docker service ...
	I0830 21:23:17.505403  971113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:23:17.518007  971113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:23:17.529319  971113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:23:17.630700  971113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:23:17.728616  971113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:23:17.741367  971113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:23:17.757872  971113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0830 21:23:17.757952  971113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:23:17.766893  971113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:23:17.766963  971113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:23:17.775718  971113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:23:17.785383  971113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:23:17.793763  971113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:23:17.802711  971113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:23:17.810312  971113 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:23:17.810373  971113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 21:23:17.822383  971113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:23:17.830192  971113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:23:17.932998  971113 ssh_runner.go:195] Run: sudo systemctl restart crio
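	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.2, switches cgroup_manager to cgroupfs, drops any stale conmon_cgroup line and re-adds conmon_cgroup = "pod", then restarts CRI-O. A rough Go equivalent of those line rewrites (regex-based and purely illustrative; minikube shells out to sed exactly as shown in the log):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// rewriteCrioConf mimics the sed edits from the log: replace the pause_image
	// and cgroup_manager lines, drop any conmon_cgroup line, then re-insert
	// conmon_cgroup = "pod" directly after cgroup_manager.
	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conmon := regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`)

		conf = pause.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = conmon.ReplaceAllString(conf, "")
		conf = cgroup.ReplaceAllString(conf,
			fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager))
		return conf
	}

	func main() {
		in := strings.Join([]string{
			`pause_image = "registry.k8s.io/pause:3.9"`,
			`cgroup_manager = "systemd"`,
			`conmon_cgroup = "system.slice"`,
		}, "\n")
		fmt.Println(rewriteCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
	}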
	I0830 21:23:18.127151  971113 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:23:18.127243  971113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:23:18.132339  971113 start.go:534] Will wait 60s for crictl version
	I0830 21:23:18.132397  971113 ssh_runner.go:195] Run: which crictl
	I0830 21:23:18.136788  971113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:23:18.177897  971113 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 21:23:18.178013  971113 ssh_runner.go:195] Run: crio --version
	I0830 21:23:18.220291  971113 ssh_runner.go:195] Run: crio --version
	I0830 21:23:18.271588  971113 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0830 21:23:18.273131  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetIP
	I0830 21:23:18.276157  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:18.276524  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:18.276553  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:18.276807  971113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 21:23:18.280743  971113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
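	Both /etc/hosts rewrites in this log (host.minikube.internal here, control-plane.minikube.internal later on) follow the same pattern: drop any existing line ending in the name, then append the fresh "ip<TAB>name" mapping. A small sketch of that idempotent update (the function name is made up for illustration):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry removes any line already mapping the given name and
	// appends the current ip -> name entry, mirroring the grep -v / echo / cp
	// pipeline in the log.
	func ensureHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
				continue // stale entry for this name
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		in := "127.0.0.1\tlocalhost\n"
		fmt.Print(ensureHostsEntry(in, "192.168.39.1", "host.minikube.internal"))
	}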
	I0830 21:23:18.292970  971113 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0830 21:23:18.293040  971113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:23:18.322218  971113 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0830 21:23:18.322317  971113 ssh_runner.go:195] Run: which lz4
	I0830 21:23:18.326072  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0830 21:23:18.326192  971113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 21:23:18.330269  971113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 21:23:18.330298  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0830 21:23:20.309281  971113 crio.go:444] Took 1.983129 seconds to copy over tarball
	I0830 21:23:20.309380  971113 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 21:23:23.304796  971113 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.995381822s)
	I0830 21:23:23.304822  971113 crio.go:451] Took 2.995516 seconds to extract the tarball
	I0830 21:23:23.304832  971113 ssh_runner.go:146] rm: /preloaded.tar.lz4
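	The preload path first asks `crictl images --output json` whether the pinned kube-apiserver image is already present; only because it is missing does it copy the lz4 tarball over and unpack it with `tar -I lz4 -C /var`. A sketch of the presence check against crictl's JSON output (the struct is trimmed to just the repoTags field this check needs):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// criImages mirrors only the part of `crictl images --output json`
	// needed to decide whether a preload is required.
	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether the given tag appears in the crictl output.
	func hasImage(raw []byte, tag string) (bool, error) {
		var out criImages
		if err := json.Unmarshal(raw, &out); err != nil {
			return false, err
		}
		for _, img := range out.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
		ok, _ := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.18.20")
		fmt.Println("preloaded:", ok) // false -> copy and extract preloaded.tar.lz4
	}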
	I0830 21:23:23.347127  971113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:23:23.400300  971113 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0830 21:23:23.400330  971113 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 21:23:23.400415  971113 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:23:23.400437  971113 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0830 21:23:23.400451  971113 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:23:23.400470  971113 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0830 21:23:23.400494  971113 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:23:23.400563  971113 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0830 21:23:23.400426  971113 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:23:23.400676  971113 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:23:23.401735  971113 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0830 21:23:23.401800  971113 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0830 21:23:23.401823  971113 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:23:23.401832  971113 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:23:23.401801  971113 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:23:23.401852  971113 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0830 21:23:23.401801  971113 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:23:23.401805  971113 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:23:23.575292  971113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0830 21:23:23.581030  971113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:23:23.582113  971113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:23:23.583687  971113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:23:23.603444  971113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:23:23.604492  971113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0830 21:23:23.630805  971113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0830 21:23:23.673536  971113 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0830 21:23:23.673593  971113 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0830 21:23:23.673645  971113 ssh_runner.go:195] Run: which crictl
	I0830 21:23:23.683213  971113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:23:23.700857  971113 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0830 21:23:23.700898  971113 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:23:23.700939  971113 ssh_runner.go:195] Run: which crictl
	I0830 21:23:23.718489  971113 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0830 21:23:23.718528  971113 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:23:23.718582  971113 ssh_runner.go:195] Run: which crictl
	I0830 21:23:23.756200  971113 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0830 21:23:23.756253  971113 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:23:23.756310  971113 ssh_runner.go:195] Run: which crictl
	I0830 21:23:23.769959  971113 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0830 21:23:23.769997  971113 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:23:23.770036  971113 ssh_runner.go:195] Run: which crictl
	I0830 21:23:23.783466  971113 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0830 21:23:23.783519  971113 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0830 21:23:23.783586  971113 ssh_runner.go:195] Run: which crictl
	I0830 21:23:23.789983  971113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0830 21:23:23.792059  971113 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0830 21:23:23.792098  971113 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0830 21:23:23.792139  971113 ssh_runner.go:195] Run: which crictl
	I0830 21:23:23.904921  971113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:23:23.904958  971113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:23:23.904988  971113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:23:23.905029  971113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:23:23.905070  971113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0830 21:23:23.905130  971113 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0830 21:23:23.905153  971113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0830 21:23:24.017931  971113 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0830 21:23:24.024108  971113 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0830 21:23:24.024192  971113 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0830 21:23:24.024351  971113 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0830 21:23:24.024412  971113 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0830 21:23:24.024423  971113 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0830 21:23:24.024468  971113 cache_images.go:92] LoadImages completed in 624.12148ms
	W0830 21:23:24.024565  971113 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
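	Each cache_images line above runs `podman image inspect --format {{.Id}}` in the guest and compares the result with the expected hash; a mismatch (or a missing image) marks the image as "needs transfer", it is removed with crictl rmi, and minikube then tries to load it from the local cache directory, which is absent here and produces the warning. A sketch of that decision (the cache-path construction is inferred from the paths in this log, not taken from minikube's source):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// needsTransfer reports whether the image present in the runtime (inspectedID,
	// as returned by `podman image inspect --format {{.Id}}`) differs from the
	// hash expected for the cached image; when it does, the cached copy at the
	// returned path must be loaded instead.
	func needsTransfer(image, inspectedID, wantHash string) (bool, string) {
		if inspectedID == wantHash {
			return false, ""
		}
		// e.g. registry.k8s.io/etcd:3.4.3-0 -> .../images/amd64/registry.k8s.io/etcd_3.4.3-0
		cachePath := filepath.Join(".minikube/cache/images/amd64",
			strings.ReplaceAll(image, ":", "_"))
		return true, cachePath
	}

	func main() {
		ok, path := needsTransfer(
			"registry.k8s.io/etcd:3.4.3-0",
			"", // podman found no such image in the guest
			"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f")
		fmt.Println(ok, path)
	}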
	I0830 21:23:24.024647  971113 ssh_runner.go:195] Run: crio config
	I0830 21:23:24.090317  971113 cni.go:84] Creating CNI manager for ""
	I0830 21:23:24.090341  971113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:23:24.090361  971113 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:23:24.090381  971113 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.247 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-306023 NodeName:ingress-addon-legacy-306023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.247"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.247 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 21:23:24.090512  971113 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.247
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-306023"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.247
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.247"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
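	The YAML above is rendered from minikube's kubeadm template using the options dumped at kubeadm.go:176. A heavily trimmed sketch of that rendering with text/template, keeping only a handful of the per-cluster values (the field set is reduced for illustration and is not the full template):

	package main

	import (
		"os"
		"text/template"
	)

	// A cut-down version of the ClusterConfiguration printed above; only the
	// fields that vary per cluster are templated here.
	const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(clusterCfg))
		// Values copied from the log above.
		_ = t.Execute(os.Stdout, map[string]string{
			"AdvertiseAddress":  "192.168.39.247",
			"KubernetesVersion": "v1.18.20",
			"PodSubnet":         "10.244.0.0/16",
			"ServiceSubnet":     "10.96.0.0/12",
		})
	}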
	I0830 21:23:24.090581  971113 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-306023 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.247
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-306023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 21:23:24.090641  971113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0830 21:23:24.101596  971113 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:23:24.101705  971113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 21:23:24.111645  971113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0830 21:23:24.127543  971113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0830 21:23:24.143586  971113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0830 21:23:24.159642  971113 ssh_runner.go:195] Run: grep 192.168.39.247	control-plane.minikube.internal$ /etc/hosts
	I0830 21:23:24.163299  971113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.247	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:23:24.174552  971113 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023 for IP: 192.168.39.247
	I0830 21:23:24.174579  971113 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:23:24.174754  971113 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 21:23:24.174795  971113 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 21:23:24.174866  971113 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.key
	I0830 21:23:24.174880  971113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt with IP's: []
	I0830 21:23:24.262368  971113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt ...
	I0830 21:23:24.262405  971113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: {Name:mk275799322336e1b122307365bf1c0cbacda0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:23:24.262599  971113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.key ...
	I0830 21:23:24.262613  971113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.key: {Name:mkb2c5f6d2ec58071a26ce4916493e727ec38719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:23:24.262729  971113 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.key.890e8c75
	I0830 21:23:24.262746  971113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.crt.890e8c75 with IP's: [192.168.39.247 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 21:23:24.469033  971113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.crt.890e8c75 ...
	I0830 21:23:24.469065  971113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.crt.890e8c75: {Name:mk5638284fe8ddb6c4a52966af02a932fa357c3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:23:24.469247  971113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.key.890e8c75 ...
	I0830 21:23:24.469269  971113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.key.890e8c75: {Name:mk9c4a1b78a7318bc5ee1695ca6e63ea724af282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:23:24.469361  971113 certs.go:337] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.crt.890e8c75 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.crt
	I0830 21:23:24.469451  971113 certs.go:341] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.key.890e8c75 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.key
	I0830 21:23:24.469505  971113 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.key
	I0830 21:23:24.469521  971113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.crt with IP's: []
	I0830 21:23:24.611457  971113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.crt ...
	I0830 21:23:24.611492  971113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.crt: {Name:mk8e360fa4a1fe7b093f73310aa7cfd1ca970bd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:23:24.611675  971113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.key ...
	I0830 21:23:24.611689  971113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.key: {Name:mk105f9ae0f26586c601b67712caeb3a03345f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
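	The certs.go/crypto.go lines above generate the per-profile certificates, including an apiserver certificate whose IP SANs are [192.168.39.247 10.96.0.1 127.0.0.1 10.0.0.1], signed by the shared minikubeCA. A compact sketch of issuing a certificate with those IP SANs via crypto/x509 (self-signed here to keep the example short; minikube signs with the CA key instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// IP SANs copied from the apiserver cert generated above.
		ips := []net.IP{
			net.ParseIP("192.168.39.247"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * 365 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		// Self-signed for brevity; a real apiserver cert is signed by minikubeCA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}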
	I0830 21:23:24.611807  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0830 21:23:24.611826  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0830 21:23:24.611836  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0830 21:23:24.611845  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0830 21:23:24.611857  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 21:23:24.611867  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 21:23:24.611879  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 21:23:24.611889  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 21:23:24.611937  971113 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 21:23:24.611970  971113 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 21:23:24.611978  971113 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 21:23:24.612026  971113 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 21:23:24.612059  971113 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:23:24.612085  971113 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 21:23:24.612129  971113 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:23:24.612155  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:23:24.612167  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem -> /usr/share/ca-certificates/962621.pem
	I0830 21:23:24.612176  971113 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /usr/share/ca-certificates/9626212.pem
	I0830 21:23:24.612797  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 21:23:24.637482  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 21:23:24.658841  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 21:23:24.680124  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 21:23:24.700749  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:23:24.722308  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:23:24.743264  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:23:24.765901  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 21:23:24.787211  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:23:24.808127  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 21:23:24.828560  971113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 21:23:24.849235  971113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 21:23:24.864409  971113 ssh_runner.go:195] Run: openssl version
	I0830 21:23:24.869875  971113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 21:23:24.880439  971113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 21:23:24.885135  971113 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:23:24.885202  971113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 21:23:24.891092  971113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 21:23:24.901417  971113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 21:23:24.911777  971113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 21:23:24.916482  971113 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:23:24.916538  971113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 21:23:24.921766  971113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 21:23:24.932129  971113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:23:24.942515  971113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:23:24.947199  971113 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:23:24.947246  971113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:23:24.952622  971113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
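	The openssl/ln pairs above are how the CA certificates get registered with the guest's trust store: compute the OpenSSL subject hash of each PEM and point <hash>.0 in /etc/ssl/certs at it. A sketch of the same pattern from Go (assumes the openssl binary is on PATH, as it is inside the minikube guest):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash reproduces the pattern above: compute the OpenSSL subject
	// hash of a CA file and point <hash>.0 in certsDir at it, which is how the
	// system trust store locates the certificate.
	func linkBySubjectHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // equivalent of ln -f
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}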
	I0830 21:23:24.963127  971113 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:23:24.967244  971113 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:23:24.967296  971113 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-306023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-306023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:23:24.967396  971113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 21:23:24.967444  971113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:23:24.997055  971113 cri.go:89] found id: ""
	I0830 21:23:24.997188  971113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 21:23:25.007299  971113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 21:23:25.016667  971113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 21:23:25.026029  971113 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:23:25.026086  971113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0830 21:23:25.083817  971113 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0830 21:23:25.200093  971113 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 21:23:25.211141  971113 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 21:23:25.211311  971113 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 21:23:25.211403  971113 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 21:23:25.381890  971113 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:23:25.382643  971113 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:23:25.382856  971113 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 21:23:25.515036  971113 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 21:23:25.636799  971113 out.go:204]   - Generating certificates and keys ...
	I0830 21:23:25.636958  971113 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 21:23:25.637086  971113 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 21:23:25.637194  971113 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 21:23:25.762432  971113 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 21:23:26.012756  971113 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 21:23:26.092461  971113 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 21:23:26.327871  971113 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 21:23:26.328123  971113 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-306023 localhost] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0830 21:23:26.475288  971113 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 21:23:26.475479  971113 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-306023 localhost] and IPs [192.168.39.247 127.0.0.1 ::1]
	I0830 21:23:26.686021  971113 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 21:23:27.073604  971113 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 21:23:27.240732  971113 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 21:23:27.241226  971113 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 21:23:27.412891  971113 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 21:23:27.616489  971113 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 21:23:27.853607  971113 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 21:23:28.092642  971113 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 21:23:28.093597  971113 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 21:23:28.095619  971113 out.go:204]   - Booting up control plane ...
	I0830 21:23:28.095750  971113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 21:23:28.101325  971113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 21:23:28.102578  971113 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 21:23:28.103498  971113 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 21:23:28.105681  971113 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 21:23:36.608506  971113 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502540 seconds
	I0830 21:23:36.608668  971113 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 21:23:36.622455  971113 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 21:23:37.145451  971113 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 21:23:37.145633  971113 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-306023 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0830 21:23:37.655992  971113 kubeadm.go:322] [bootstrap-token] Using token: 9oxy1c.tdbqo223jdrt158u
	I0830 21:23:37.657990  971113 out.go:204]   - Configuring RBAC rules ...
	I0830 21:23:37.658100  971113 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 21:23:37.669796  971113 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 21:23:37.693912  971113 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 21:23:37.699373  971113 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 21:23:37.703044  971113 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 21:23:37.708925  971113 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 21:23:37.718775  971113 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 21:23:38.027194  971113 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 21:23:38.105714  971113 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 21:23:38.108027  971113 kubeadm.go:322] 
	I0830 21:23:38.108133  971113 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 21:23:38.108190  971113 kubeadm.go:322] 
	I0830 21:23:38.108318  971113 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 21:23:38.108333  971113 kubeadm.go:322] 
	I0830 21:23:38.108364  971113 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 21:23:38.108451  971113 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 21:23:38.108550  971113 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 21:23:38.108572  971113 kubeadm.go:322] 
	I0830 21:23:38.108643  971113 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 21:23:38.108756  971113 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 21:23:38.108846  971113 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 21:23:38.108860  971113 kubeadm.go:322] 
	I0830 21:23:38.108960  971113 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 21:23:38.109063  971113 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 21:23:38.109078  971113 kubeadm.go:322] 
	I0830 21:23:38.109179  971113 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9oxy1c.tdbqo223jdrt158u \
	I0830 21:23:38.109327  971113 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 21:23:38.109374  971113 kubeadm.go:322]     --control-plane 
	I0830 21:23:38.109396  971113 kubeadm.go:322] 
	I0830 21:23:38.109504  971113 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 21:23:38.109513  971113 kubeadm.go:322] 
	I0830 21:23:38.109629  971113 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9oxy1c.tdbqo223jdrt158u \
	I0830 21:23:38.109786  971113 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 21:23:38.110037  971113 kubeadm.go:322] W0830 21:23:25.076468     966 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0830 21:23:38.110141  971113 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 21:23:38.110301  971113 kubeadm.go:322] W0830 21:23:28.096012     966 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0830 21:23:38.110453  971113 kubeadm.go:322] W0830 21:23:28.097444     966 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0830 21:23:38.110488  971113 cni.go:84] Creating CNI manager for ""
	I0830 21:23:38.110507  971113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:23:38.112449  971113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 21:23:38.113847  971113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 21:23:38.123256  971113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
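Note on the bridge CNI step above: minikube copies a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist before the kube-system workloads come up. On a reproduction run of this profile the file can be inspected directly on the node; a minimal sketch, assuming the same binary path and profile name used by this job (the conflist contents themselves are not captured in this log):

  # Print the bridge CNI config minikube copied onto the node
  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
  # CRI-O's view of its CNI configuration (part of the `crictl info` dump)
  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 ssh -- sudo crictl info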
	I0830 21:23:38.144038  971113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 21:23:38.144090  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:38.144090  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=ingress-addon-legacy-306023 minikube.k8s.io/updated_at=2023_08_30T21_23_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
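The two kubectl invocations above run back to back: the first grants cluster-admin to the kube-system default ServiceAccount via the minikube-rbac ClusterRoleBinding, the second stamps the node with the minikube version/commit labels. A quick check on a live reproduction, assuming the kubeconfig context created by this run:

  # ClusterRoleBinding created by the first command
  kubectl --context ingress-addon-legacy-306023 get clusterrolebinding minikube-rbac
  # Labels applied by the second command
  kubectl --context ingress-addon-legacy-306023 get node ingress-addon-legacy-306023 --show-labels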
	I0830 21:23:38.333157  971113 ops.go:34] apiserver oom_adj: -16
	I0830 21:23:38.333359  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:38.483351  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:39.183621  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:39.683895  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:40.183869  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:40.683892  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:41.183494  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:41.683897  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:42.182994  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:42.683852  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:43.183898  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:43.683889  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:44.183877  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:44.683270  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:45.183876  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:45.683219  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:46.183157  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:46.683268  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:47.183051  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:47.683111  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:48.183687  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:48.683345  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:49.183894  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:49.683308  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:50.183586  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:50.683831  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:51.183877  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:51.683009  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:52.183394  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:52.683033  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:53.183544  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:53.683973  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:54.183091  971113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:23:54.292337  971113 kubeadm.go:1081] duration metric: took 16.148311615s to wait for elevateKubeSystemPrivileges.
	I0830 21:23:54.292430  971113 kubeadm.go:406] StartCluster complete in 29.325138299s
	I0830 21:23:54.292484  971113 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:23:54.292571  971113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:23:54.293529  971113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:23:54.293788  971113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 21:23:54.293984  971113 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 21:23:54.294098  971113 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-306023"
	I0830 21:23:54.294116  971113 config.go:182] Loaded profile config "ingress-addon-legacy-306023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0830 21:23:54.294113  971113 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-306023"
	I0830 21:23:54.294163  971113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-306023"
	I0830 21:23:54.294119  971113 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-306023"
	I0830 21:23:54.294313  971113 host.go:66] Checking if "ingress-addon-legacy-306023" exists ...
	I0830 21:23:54.294643  971113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:23:54.294690  971113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:23:54.294695  971113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:23:54.294732  971113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:23:54.294667  971113 kapi.go:59] client config for ingress-addon-legacy-306023: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:23:54.295669  971113 cert_rotation.go:137] Starting client certificate rotation controller
	I0830 21:23:54.310360  971113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I0830 21:23:54.310774  971113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0830 21:23:54.310968  971113 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:23:54.311107  971113 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:23:54.311523  971113 main.go:141] libmachine: Using API Version  1
	I0830 21:23:54.311550  971113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:23:54.311641  971113 main.go:141] libmachine: Using API Version  1
	I0830 21:23:54.311667  971113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:23:54.311994  971113 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:23:54.311998  971113 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:23:54.312167  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetState
	I0830 21:23:54.312613  971113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:23:54.312660  971113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:23:54.314498  971113 kapi.go:59] client config for ingress-addon-legacy-306023: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:23:54.318877  971113 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-306023"
	I0830 21:23:54.318919  971113 host.go:66] Checking if "ingress-addon-legacy-306023" exists ...
	I0830 21:23:54.319289  971113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:23:54.319332  971113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:23:54.327108  971113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I0830 21:23:54.327526  971113 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:23:54.328033  971113 main.go:141] libmachine: Using API Version  1
	I0830 21:23:54.328058  971113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:23:54.328410  971113 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:23:54.328614  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetState
	I0830 21:23:54.330228  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:54.332309  971113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:23:54.333715  971113 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:23:54.333736  971113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 21:23:54.333758  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:54.335320  971113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0830 21:23:54.335737  971113 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:23:54.336320  971113 main.go:141] libmachine: Using API Version  1
	I0830 21:23:54.336346  971113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:23:54.336709  971113 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:23:54.337061  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:54.337212  971113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:23:54.337257  971113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:23:54.337561  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:54.337593  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:54.337801  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:54.337977  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:54.338123  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:54.338270  971113 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa Username:docker}
	I0830 21:23:54.351607  971113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0830 21:23:54.352051  971113 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:23:54.352559  971113 main.go:141] libmachine: Using API Version  1
	I0830 21:23:54.352581  971113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:23:54.352884  971113 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:23:54.353043  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetState
	I0830 21:23:54.354470  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .DriverName
	I0830 21:23:54.354698  971113 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 21:23:54.354713  971113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 21:23:54.354727  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHHostname
	I0830 21:23:54.357489  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:54.357891  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:ae:5e", ip: ""} in network mk-ingress-addon-legacy-306023: {Iface:virbr1 ExpiryTime:2023-08-30 22:23:06 +0000 UTC Type:0 Mac:52:54:00:93:ae:5e Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ingress-addon-legacy-306023 Clientid:01:52:54:00:93:ae:5e}
	I0830 21:23:54.357942  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | domain ingress-addon-legacy-306023 has defined IP address 192.168.39.247 and MAC address 52:54:00:93:ae:5e in network mk-ingress-addon-legacy-306023
	I0830 21:23:54.358085  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHPort
	I0830 21:23:54.358274  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHKeyPath
	I0830 21:23:54.358459  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .GetSSHUsername
	I0830 21:23:54.358619  971113 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/ingress-addon-legacy-306023/id_rsa Username:docker}
	I0830 21:23:54.367252  971113 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-306023" context rescaled to 1 replicas
	I0830 21:23:54.367290  971113 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.247 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:23:54.368927  971113 out.go:177] * Verifying Kubernetes components...
	I0830 21:23:54.370656  971113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:23:54.490546  971113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:23:54.531859  971113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
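The pipeline above rewrites the CoreDNS ConfigMap so the Corefile gains a hosts block mapping host.minikube.internal to 192.168.39.1 (confirmed a few lines later by the "host record injected" message). A short way to verify the result, again assuming the context written by this run:

  # Show the injected hosts block in the CoreDNS Corefile
  kubectl --context ingress-addon-legacy-306023 -n kube-system get configmap coredns -o yaml | grep -A 4 'hosts {'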
	I0830 21:23:54.532705  971113 kapi.go:59] client config for ingress-addon-legacy-306023: &rest.Config{Host:"https://192.168.39.247:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:23:54.533083  971113 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-306023" to be "Ready" ...
	I0830 21:23:54.537131  971113 node_ready.go:49] node "ingress-addon-legacy-306023" has status "Ready":"True"
	I0830 21:23:54.537157  971113 node_ready.go:38] duration metric: took 4.043351ms waiting for node "ingress-addon-legacy-306023" to be "Ready" ...
	I0830 21:23:54.537170  971113 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:23:54.539445  971113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 21:23:54.547093  971113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-8rqch" in "kube-system" namespace to be "Ready" ...
	I0830 21:23:55.094344  971113 main.go:141] libmachine: Making call to close driver server
	I0830 21:23:55.094377  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .Close
	I0830 21:23:55.094396  971113 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0830 21:23:55.094475  971113 main.go:141] libmachine: Making call to close driver server
	I0830 21:23:55.094493  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .Close
	I0830 21:23:55.094690  971113 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:23:55.094712  971113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:23:55.094723  971113 main.go:141] libmachine: Making call to close driver server
	I0830 21:23:55.094732  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .Close
	I0830 21:23:55.094931  971113 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:23:55.094948  971113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:23:55.094957  971113 main.go:141] libmachine: Making call to close driver server
	I0830 21:23:55.094980  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .Close
	I0830 21:23:55.095111  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Closing plugin on server side
	I0830 21:23:55.095114  971113 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:23:55.095140  971113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:23:55.096007  971113 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:23:55.096030  971113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:23:55.096032  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Closing plugin on server side
	I0830 21:23:55.096044  971113 main.go:141] libmachine: Making call to close driver server
	I0830 21:23:55.096060  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) Calling .Close
	I0830 21:23:55.096311  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Closing plugin on server side
	I0830 21:23:55.096348  971113 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:23:55.096361  971113 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:23:55.098352  971113 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 21:23:55.096504  971113 main.go:141] libmachine: (ingress-addon-legacy-306023) DBG | Closing plugin on server side
	I0830 21:23:55.099716  971113 addons.go:502] enable addons completed in 805.763772ms: enabled=[storage-provisioner default-storageclass]
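Once "enable addons completed" is logged, both manifests applied earlier should be visible in the cluster; the storage-provisioner pod indeed appears as Running in the pod list further down. A minimal check, with the same context assumption as above:

  # Pod created from storage-provisioner.yaml
  kubectl --context ingress-addon-legacy-306023 -n kube-system get pod storage-provisioner
  # Default StorageClass created from storageclass.yaml
  kubectl --context ingress-addon-legacy-306023 get storageclass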
	I0830 21:23:56.580881  971113 pod_ready.go:102] pod "coredns-66bff467f8-8rqch" in "kube-system" namespace has status "Ready":"False"
	I0830 21:23:58.075839  971113 pod_ready.go:92] pod "coredns-66bff467f8-8rqch" in "kube-system" namespace has status "Ready":"True"
	I0830 21:23:58.075866  971113 pod_ready.go:81] duration metric: took 3.528751004s waiting for pod "coredns-66bff467f8-8rqch" in "kube-system" namespace to be "Ready" ...
	I0830 21:23:58.075875  971113 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-grg77" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:00.096412  971113 pod_ready.go:102] pod "coredns-66bff467f8-grg77" in "kube-system" namespace has status "Ready":"False"
	I0830 21:24:02.096576  971113 pod_ready.go:102] pod "coredns-66bff467f8-grg77" in "kube-system" namespace has status "Ready":"False"
	I0830 21:24:04.096908  971113 pod_ready.go:102] pod "coredns-66bff467f8-grg77" in "kube-system" namespace has status "Ready":"False"
	I0830 21:24:06.097549  971113 pod_ready.go:102] pod "coredns-66bff467f8-grg77" in "kube-system" namespace has status "Ready":"False"
	I0830 21:24:08.592519  971113 pod_ready.go:97] error getting pod "coredns-66bff467f8-grg77" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-grg77" not found
	I0830 21:24:08.592552  971113 pod_ready.go:81] duration metric: took 10.516671075s waiting for pod "coredns-66bff467f8-grg77" in "kube-system" namespace to be "Ready" ...
	E0830 21:24:08.592567  971113 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-grg77" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-grg77" not found
	I0830 21:24:08.592576  971113 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-306023" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.598236  971113 pod_ready.go:92] pod "etcd-ingress-addon-legacy-306023" in "kube-system" namespace has status "Ready":"True"
	I0830 21:24:08.598256  971113 pod_ready.go:81] duration metric: took 5.67155ms waiting for pod "etcd-ingress-addon-legacy-306023" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.598271  971113 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-306023" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.602848  971113 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-306023" in "kube-system" namespace has status "Ready":"True"
	I0830 21:24:08.602864  971113 pod_ready.go:81] duration metric: took 4.585822ms waiting for pod "kube-apiserver-ingress-addon-legacy-306023" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.602878  971113 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-306023" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.607139  971113 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-306023" in "kube-system" namespace has status "Ready":"True"
	I0830 21:24:08.607158  971113 pod_ready.go:81] duration metric: took 4.272257ms waiting for pod "kube-controller-manager-ingress-addon-legacy-306023" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.607169  971113 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j8947" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.611725  971113 pod_ready.go:92] pod "kube-proxy-j8947" in "kube-system" namespace has status "Ready":"True"
	I0830 21:24:08.611739  971113 pod_ready.go:81] duration metric: took 4.564608ms waiting for pod "kube-proxy-j8947" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.611745  971113 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-306023" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.790000  971113 request.go:629] Waited for 176.248172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes/ingress-addon-legacy-306023
	I0830 21:24:08.794501  971113 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-306023" in "kube-system" namespace has status "Ready":"True"
	I0830 21:24:08.794526  971113 pod_ready.go:81] duration metric: took 182.773818ms waiting for pod "kube-scheduler-ingress-addon-legacy-306023" in "kube-system" namespace to be "Ready" ...
	I0830 21:24:08.794540  971113 pod_ready.go:38] duration metric: took 14.257353485s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:24:08.794563  971113 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:24:08.794631  971113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:24:08.808798  971113 api_server.go:72] duration metric: took 14.441462487s to wait for apiserver process to appear ...
	I0830 21:24:08.808826  971113 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:24:08.808845  971113 api_server.go:253] Checking apiserver healthz at https://192.168.39.247:8443/healthz ...
	I0830 21:24:08.814502  971113 api_server.go:279] https://192.168.39.247:8443/healthz returned 200:
	ok
	I0830 21:24:08.815392  971113 api_server.go:141] control plane version: v1.18.20
	I0830 21:24:08.815414  971113 api_server.go:131] duration metric: took 6.580581ms to wait for apiserver health ...
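The healthz probe above is a plain GET against the apiserver, which by default permits unauthenticated access to /healthz. It can be reproduced by hand; a sketch (the -k flag skips CA verification for brevity, the profile's ca.crt can be passed with --cacert instead):

  # Should print "ok", matching the 200 response logged above
  curl -k https://192.168.39.247:8443/healthz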
	I0830 21:24:08.815423  971113 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:24:08.989810  971113 request.go:629] Waited for 174.30914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I0830 21:24:08.996784  971113 system_pods.go:59] 7 kube-system pods found
	I0830 21:24:08.996826  971113 system_pods.go:61] "coredns-66bff467f8-8rqch" [c94d8d01-917b-406a-bda5-66947ab04669] Running
	I0830 21:24:08.996835  971113 system_pods.go:61] "etcd-ingress-addon-legacy-306023" [03baa364-253d-4299-a315-673ccee54a6c] Running
	I0830 21:24:08.996846  971113 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-306023" [bcff0e41-be7e-4237-a83b-579a998652a4] Running
	I0830 21:24:08.996852  971113 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-306023" [8287a812-1895-4d4b-a107-540078a2e404] Running
	I0830 21:24:08.996858  971113 system_pods.go:61] "kube-proxy-j8947" [dfea0eb2-3aef-4cea-9936-dd9728c6fd03] Running
	I0830 21:24:08.996863  971113 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-306023" [6d952580-91e5-4ac2-aacc-ef902db53c99] Running
	I0830 21:24:08.996869  971113 system_pods.go:61] "storage-provisioner" [778e7cba-fb5b-46ba-a98d-9f56f8cd9d81] Running
	I0830 21:24:08.996877  971113 system_pods.go:74] duration metric: took 181.446939ms to wait for pod list to return data ...
	I0830 21:24:08.996889  971113 default_sa.go:34] waiting for default service account to be created ...
	I0830 21:24:09.190367  971113 request.go:629] Waited for 193.368061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/default/serviceaccounts
	I0830 21:24:09.194672  971113 default_sa.go:45] found service account: "default"
	I0830 21:24:09.194700  971113 default_sa.go:55] duration metric: took 197.804972ms for default service account to be created ...
	I0830 21:24:09.194714  971113 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 21:24:09.390169  971113 request.go:629] Waited for 195.369755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/namespaces/kube-system/pods
	I0830 21:24:09.396672  971113 system_pods.go:86] 7 kube-system pods found
	I0830 21:24:09.396706  971113 system_pods.go:89] "coredns-66bff467f8-8rqch" [c94d8d01-917b-406a-bda5-66947ab04669] Running
	I0830 21:24:09.396714  971113 system_pods.go:89] "etcd-ingress-addon-legacy-306023" [03baa364-253d-4299-a315-673ccee54a6c] Running
	I0830 21:24:09.396722  971113 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-306023" [bcff0e41-be7e-4237-a83b-579a998652a4] Running
	I0830 21:24:09.396729  971113 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-306023" [8287a812-1895-4d4b-a107-540078a2e404] Running
	I0830 21:24:09.396735  971113 system_pods.go:89] "kube-proxy-j8947" [dfea0eb2-3aef-4cea-9936-dd9728c6fd03] Running
	I0830 21:24:09.396742  971113 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-306023" [6d952580-91e5-4ac2-aacc-ef902db53c99] Running
	I0830 21:24:09.396748  971113 system_pods.go:89] "storage-provisioner" [778e7cba-fb5b-46ba-a98d-9f56f8cd9d81] Running
	I0830 21:24:09.396760  971113 system_pods.go:126] duration metric: took 202.04008ms to wait for k8s-apps to be running ...
	I0830 21:24:09.396770  971113 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:24:09.396829  971113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:24:09.411252  971113 system_svc.go:56] duration metric: took 14.474014ms WaitForService to wait for kubelet.
	I0830 21:24:09.411278  971113 kubeadm.go:581] duration metric: took 15.043945584s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:24:09.411299  971113 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:24:09.590730  971113 request.go:629] Waited for 179.351275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.247:8443/api/v1/nodes
	I0830 21:24:09.594828  971113 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:24:09.594866  971113 node_conditions.go:123] node cpu capacity is 2
	I0830 21:24:09.594877  971113 node_conditions.go:105] duration metric: took 183.57352ms to run NodePressure ...
	I0830 21:24:09.594888  971113 start.go:228] waiting for startup goroutines ...
	I0830 21:24:09.594894  971113 start.go:233] waiting for cluster config update ...
	I0830 21:24:09.594908  971113 start.go:242] writing updated cluster config ...
	I0830 21:24:09.595190  971113 ssh_runner.go:195] Run: rm -f paused
	I0830 21:24:09.645694  971113 start.go:600] kubectl: 1.28.1, cluster: 1.18.20 (minor skew: 10)
	I0830 21:24:09.647786  971113 out.go:177] 
	W0830 21:24:09.649416  971113 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0830 21:24:09.650787  971113 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0830 21:24:09.652185  971113 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-306023" cluster and "default" namespace by default
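The warning a few lines up concerns the ten-minor-version skew between the host kubectl (1.28.1) and the cluster (1.18.20); as the hint in the log says, the bundled kubectl sidesteps it. A sketch using the same binary path as the rest of this job:

  # Runs a kubectl matching the cluster's v1.18.20 instead of /usr/local/bin/kubectl
  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 kubectl -- get pods -A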
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 21:23:03 UTC, ends at Wed 2023-08-30 21:27:05 UTC. --
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.806948848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=920f4b7b-351b-4e40-8d22-7667a694b1c7 name=/runtime.v1alpha2.Runti
meService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.913726263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ce930cec-fb1b-423c-b394-870e5cacdc57 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.913790247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ce930cec-fb1b-423c-b394-870e5cacdc57 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.914051448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ce930cec-fb1b-423c-b394-870e5cacdc57 name=/runtime.v1alpha2.Runti
meService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.947605812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d340b147-9ceb-43ae-bca7-a12a24631c0a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.947666761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d340b147-9ceb-43ae-bca7-a12a24631c0a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.947918488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d340b147-9ceb-43ae-bca7-a12a24631c0a name=/runtime.v1alpha2.Runti
meService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.979444699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=445a2e3d-68c8-4c1a-8695-bbfef38f16b2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.979510107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=445a2e3d-68c8-4c1a-8695-bbfef38f16b2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:04 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:04.979862715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=445a2e3d-68c8-4c1a-8695-bbfef38f16b2 name=/runtime.v1alpha2.Runti
meService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.012136614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c291e008-ed22-45b5-b251-8224a8782506 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.012207988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c291e008-ed22-45b5-b251-8224a8782506 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.012556318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c291e008-ed22-45b5-b251-8224a8782506 name=/runtime.v1alpha2.Runti
meService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.046483741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3422112b-27aa-4e2e-a2e9-b1733a06bcfd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.046548945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3422112b-27aa-4e2e-a2e9-b1733a06bcfd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.046776651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3422112b-27aa-4e2e-a2e9-b1733a06bcfd name=/runtime.v1alpha2.Runti
meService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.078015325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=48d8db83-f689-4b4c-b14e-8b89e18ce29e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.078075302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=48d8db83-f689-4b4c-b14e-8b89e18ce29e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.078487489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=48d8db83-f689-4b4c-b14e-8b89e18ce29e name=/runtime.v1alpha2.Runti
meService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.112439345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5366988a-a129-4529-8182-0721befd13ad name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.112505211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5366988a-a129-4529-8182-0721befd13ad name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.112815395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5366988a-a129-4529-8182-0721befd13ad name=/runtime.v1alpha2.Runti
meService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.140738684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=82809d6b-8f89-4d59-9901-5c371d341f11 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.140805067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=82809d6b-8f89-4d59-9901-5c371d341f11 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:27:05 ingress-addon-legacy-306023 crio[720]: time="2023-08-30 21:27:05.141064853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8f62be482b031a42dd8d3d997ba7903289647ab868e6048dbfd447a9d32ffa3,PodSandboxId:44a1a5d37d9fa35aff37a5a9c7fbccc23a6ccd3dad1f5ba896d081cd4dceed67,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1693430816070807946,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-scrwn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34aeae27-363f-4168-8e76-5949435b6f85,},Annotations:map[string]string{io.kubernetes.container.hash: acf072fd,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b629cc3256e65854610795b25a79c7197d5d20cdcd8857b83c60db6ce737b,PodSandboxId:3c2e945b1feee547f1bb6bf4c665eaa6be68eed816cfe19331731035271f1289,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1693430676268716471,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3dacb2e-a4c3-414a-9e73-713b175fc41f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 9b63e2b9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c597752cb22a65652d1b60aab2b6e5a26d207dbf4d8dc18a7083b37ab8b57cb,PodSandboxId:bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1693430661640545499,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-rhn4t,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d613e907-86be-4145-995b-7df1d1299386,},Annotations:map[string]string{io.kubernetes.container.hash: a58d8250,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:489883ee1e131e0626f5a5bfecaa890ab280aab756cd2faefbc018d3e6155495,PodSandboxId:5c5160b7dc7446cd6b9be07d0bc529ddc8798d30309c7b2dc5081db210928886,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652821562046,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j46hp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9448b898-17df-4119-ade4-fbd067c79fa6,},Annotations:map[string]string{io.kubernetes.container.hash: f8cae9ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce64cd316ef18f1bd9dabf8c80f5b3ee7fffa35d70c1eb5a1ec0632eb0037045,PodSandboxId:36fbb9559447537885f71a31724236090b433e5a7fe092f8e7e4065a93874e09,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1693430652632665698,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cljv7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1d2ebe77-43ac-4e73-9d6a-bc49937a2bde,},Annotations:map[string]string{io.kubernetes.container.hash: 18edd2ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3,PodSandboxId:27e9c427fa68a6f99f2f8a8406cdd476b8438adce8384012538d44a8e2386205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1693430636222575760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rqch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94d8d01-917b-406a-bda5-66947ab04669,},Annotations:map[string]string{io.kubernetes.container.hash: f3e01b84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada0722e293eb2414267404ea9e
c0e91f748d2f2a41d42b9f7b7d33a573c42a1,PodSandboxId:69a07bea0971c074fa228fcdf8b78baef8994a5f2a51e88bd86a980a85ca290b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693430635842713575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778e7cba-fb5b-46ba-a98d-9f56f8cd9d81,},Annotations:map[string]string{io.kubernetes.container.hash: ab904cb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ed1c9414dad7d78e22f2d94bea
3de0a80a9071d9fa758ac89823bf6ad0076a,PodSandboxId:c44f8063825a4ee5ccf2113387cbc9d58e856da198e3b6e2d7bf1f0868b332f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1693430635316195674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j8947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfea0eb2-3aef-4cea-9936-dd9728c6fd03,},Annotations:map[string]string{io.kubernetes.container.hash: 86681f33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4,Pod
SandboxId:4e7b8aab7d12cd306aea0a59543c2ddccb10243322343a36a0e72fe26e70ead4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1693430611193409544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d294652a85c93f4ec83cca4c7df618ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9ab49a94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77,PodSandboxId:3d3e9b0c6087c559b62d6f076bf126bf5aca
a612a2ab92043a04456ee3e9476f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1693430610388428775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f,PodSandboxId:7ec585852a6209c409ecebdb18c9c9383e585be985
f07e5f851c68e662e23d6b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1693430610009005813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9,PodSandboxId:cb0087632dc3
67370f9719f73a247bd816089da189459a1a0eaf63b23c75610e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1693430609925536697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-306023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39521cdcdb4f2757ccf9ad6c8f246533,},Annotations:map[string]string{io.kubernetes.container.hash: c850525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=82809d6b-8f89-4d59-9901-5c371d341f11 name=/runtime.v1alpha2.Runti
meService/ListContainers
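	(Editor's note) The crio debug entries above are repeated ListContainers calls against the runtime.v1alpha2 RuntimeService, each issued with an empty filter (hence "No filters were applied, returning full container list") and answered with the node's full container inventory. A minimal Go sketch of the same RPC follows; it assumes the default CRI-O socket path (unix:///var/run/crio/crio.sock) and the deprecated v1alpha2 CRI API that these log lines name, so treat it as an illustration rather than the code the test harness actually runs.

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	    )

	    func main() {
	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()

	    	// Dial the CRI-O endpoint (default socket path assumed).
	    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()

	    	client := pb.NewRuntimeServiceClient(conn)

	    	// An empty filter reproduces the "returning full container list" path
	    	// seen in the crio debug log above.
	    	resp, err := client.ListContainers(ctx, &pb.ListContainersRequest{
	    		Filter: &pb.ContainerFilter{},
	    	})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, c := range resp.Containers {
	    		// Id, Metadata.Name and State are the fields that reappear,
	    		// truncated, in the "container status" table below.
	    		fmt.Printf("%s\t%s\t%v\n", c.Id[:13], c.Metadata.Name, c.State)
	    	}
	    }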
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	b8f62be482b03       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            9 seconds ago       Running             hello-world-app           0                   44a1a5d37d9fa
	ad6b629cc3256       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   3c2e945b1feee
	2c597752cb22a       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   bc16a9c6dbd27
	489883ee1e131       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   5c5160b7dc744
	ce64cd316ef18       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   36fbb95594475
	af9c75ce32767       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   27e9c427fa68a
	ada0722e293eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   69a07bea0971c
	a6ed1c9414dad       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   c44f8063825a4
	db2afbb039ab2       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   4e7b8aab7d12c
	1ffb0051544b6       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   3d3e9b0c6087c
	47b5dc57e93af       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   7ec585852a620
	1a0471c075dee       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   cb0087632dc36
	
	* 
	* ==> coredns [af9c75ce32767a7e73b6ed17e2f2aac3d91871d176d1459d0a61cf1f2ddbc8d3] <==
	* [INFO] 10.244.0.6:33583 - 8697 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000241382s
	[INFO] 10.244.0.6:55910 - 61027 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068547s
	[INFO] 10.244.0.6:55910 - 35608 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068813s
	[INFO] 10.244.0.6:55910 - 32391 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000099728s
	[INFO] 10.244.0.6:33583 - 9631 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000225512s
	[INFO] 10.244.0.6:33583 - 56290 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071175s
	[INFO] 10.244.0.6:55910 - 64122 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00009258s
	[INFO] 10.244.0.6:33583 - 42636 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075718s
	[INFO] 10.244.0.6:55910 - 658 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083163s
	[INFO] 10.244.0.6:55910 - 37597 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000145395s
	[INFO] 10.244.0.6:33583 - 63148 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081122s
	[INFO] 10.244.0.6:48821 - 14840 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087332s
	[INFO] 10.244.0.6:37466 - 53082 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076036s
	[INFO] 10.244.0.6:48821 - 26639 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070942s
	[INFO] 10.244.0.6:37466 - 35899 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068727s
	[INFO] 10.244.0.6:48821 - 9997 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000096305s
	[INFO] 10.244.0.6:37466 - 64303 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028999s
	[INFO] 10.244.0.6:48821 - 46828 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036015s
	[INFO] 10.244.0.6:37466 - 8838 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023165s
	[INFO] 10.244.0.6:37466 - 47900 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038749s
	[INFO] 10.244.0.6:48821 - 12380 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000022953s
	[INFO] 10.244.0.6:48821 - 20190 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000568034s
	[INFO] 10.244.0.6:37466 - 37279 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00012114s
	[INFO] 10.244.0.6:48821 - 29514 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000210846s
	[INFO] 10.244.0.6:37466 - 2230 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041745s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-306023
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-306023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=ingress-addon-legacy-306023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T21_23_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:23:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-306023
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:26:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:24:38 +0000   Wed, 30 Aug 2023 21:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:24:38 +0000   Wed, 30 Aug 2023 21:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:24:38 +0000   Wed, 30 Aug 2023 21:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:24:38 +0000   Wed, 30 Aug 2023 21:23:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ingress-addon-legacy-306023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0791d3037c248fba0f2cfe0d68160a3
	  System UUID:                f0791d30-37c2-48fb-a0f2-cfe0d68160a3
	  Boot ID:                    2ced0414-0779-4dd3-bb55-e1f7ba5d185a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-scrwn                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-66bff467f8-8rqch                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m12s
	  kube-system                 etcd-ingress-addon-legacy-306023                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-apiserver-ingress-addon-legacy-306023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-306023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kube-proxy-j8947                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kube-scheduler-ingress-addon-legacy-306023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m37s (x4 over 3m37s)  kubelet     Node ingress-addon-legacy-306023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m37s (x3 over 3m37s)  kubelet     Node ingress-addon-legacy-306023 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m27s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m27s                  kubelet     Node ingress-addon-legacy-306023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s                  kubelet     Node ingress-addon-legacy-306023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s                  kubelet     Node ingress-addon-legacy-306023 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m17s                  kubelet     Node ingress-addon-legacy-306023 status is now: NodeReady
	  Normal  Starting                 3m10s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug30 21:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.098779] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.368420] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug30 21:23] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145758] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.034442] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.396046] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.098152] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.139589] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.099929] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.197696] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +7.571077] systemd-fstab-generator[1036]: Ignoring "noauto" for root device
	[  +3.160169] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.188667] systemd-fstab-generator[1437]: Ignoring "noauto" for root device
	[ +17.815645] kauditd_printk_skb: 6 callbacks suppressed
	[Aug30 21:24] kauditd_printk_skb: 18 callbacks suppressed
	[ +30.994799] kauditd_printk_skb: 21 callbacks suppressed
	[Aug30 21:26] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.823538] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [db2afbb039ab222a4caa5cbf5f6e037f1f89af1249c3de7d068146b3b984c6f4] <==
	* 2023-08-30 21:23:31.406443 I | etcdserver: b60ca5935c0b4769 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/30 21:23:31 INFO: b60ca5935c0b4769 switched to configuration voters=(13118041866946430825)
	2023-08-30 21:23:31.407493 I | etcdserver/membership: added member b60ca5935c0b4769 [https://192.168.39.247:2380] to cluster 7fda2fc0436a8884
	2023-08-30 21:23:31.410174 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-30 21:23:31.410615 I | embed: listening for peers on 192.168.39.247:2380
	2023-08-30 21:23:31.410992 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/08/30 21:23:31 INFO: b60ca5935c0b4769 is starting a new election at term 1
	raft2023/08/30 21:23:31 INFO: b60ca5935c0b4769 became candidate at term 2
	raft2023/08/30 21:23:31 INFO: b60ca5935c0b4769 received MsgVoteResp from b60ca5935c0b4769 at term 2
	raft2023/08/30 21:23:31 INFO: b60ca5935c0b4769 became leader at term 2
	raft2023/08/30 21:23:31 INFO: raft.node: b60ca5935c0b4769 elected leader b60ca5935c0b4769 at term 2
	2023-08-30 21:23:31.993815 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-30 21:23:31.995219 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-30 21:23:31.995407 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-30 21:23:31.995452 I | etcdserver: published {Name:ingress-addon-legacy-306023 ClientURLs:[https://192.168.39.247:2379]} to cluster 7fda2fc0436a8884
	2023-08-30 21:23:31.995468 I | embed: ready to serve client requests
	2023-08-30 21:23:31.996315 I | embed: ready to serve client requests
	2023-08-30 21:23:31.997389 I | embed: serving client requests on 192.168.39.247:2379
	2023-08-30 21:23:32.001890 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-30 21:23:53.966630 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 " with result "range_response_count:9 size:6484" took too long (292.058414ms) to execute
	2023-08-30 21:23:53.966895 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (194.968708ms) to execute
	2023-08-30 21:23:53.966930 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-public/default\" " with result "range_response_count:1 size:181" took too long (579.79787ms) to execute
	2023-08-30 21:24:18.853238 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" " with result "range_response_count:3 size:13726" took too long (236.058378ms) to execute
	2023-08-30 21:24:26.954028 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (384.413981ms) to execute
	2023-08-30 21:24:28.810403 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (249.054353ms) to execute
	
	* 
	* ==> kernel <==
	*  21:27:05 up 4 min,  0 users,  load average: 0.88, 0.64, 0.29
	Linux ingress-addon-legacy-306023 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1a0471c075deea5e6792790f77d6b4917179a87d50ea4933cd1d57bb87a78ac9] <==
	* Trace[285404078]: [553.322348ms] [552.424961ms] Transaction committed
	I0830 21:23:53.982468       1 trace.go:116] Trace[1949609140]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/view,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:192.168.39.247 (started: 2023-08-30 21:23:53.425161437 +0000 UTC m=+23.338375144) (total time: 557.28391ms):
	Trace[1949609140]: [557.226035ms] [556.989838ms] Object stored in database
	I0830 21:23:53.982669       1 trace.go:116] Trace[919813570]: "GuaranteedUpdate etcd3" type:*rbac.ClusterRole (started: 2023-08-30 21:23:53.427118969 +0000 UTC m=+23.340332757) (total time: 555.538261ms):
	Trace[919813570]: [555.504079ms] [553.969912ms] Transaction committed
	I0830 21:23:53.982748       1 trace.go:116] Trace[2086468294]: "Update" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/edit,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:clusterrole-aggregation-controller,client:192.168.39.247 (started: 2023-08-30 21:23:53.42676343 +0000 UTC m=+23.339977148) (total time: 555.972797ms):
	Trace[2086468294]: [555.943337ms] [555.649605ms] Object stored in database
	I0830 21:23:53.982904       1 trace.go:116] Trace[1655543575]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (started: 2023-08-30 21:23:53.436598685 +0000 UTC m=+23.349812404) (total time: 546.291542ms):
	Trace[1655543575]: [546.27097ms] [546.023353ms] Transaction committed
	I0830 21:23:53.982987       1 trace.go:116] Trace[1546183523]: "Update" url:/api/v1/namespaces/kube-public/configmaps/cluster-info,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:bootstrap-signer,client:192.168.39.247 (started: 2023-08-30 21:23:53.435819734 +0000 UTC m=+23.349033437) (total time: 547.155128ms):
	Trace[1546183523]: [547.126446ms] [546.378583ms] Object stored in database
	I0830 21:23:53.983244       1 trace.go:116] Trace[344079065]: "Patch" url:/api/v1/nodes/ingress-addon-legacy-306023,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:ttl-controller,client:192.168.39.247 (started: 2023-08-30 21:23:53.387903504 +0000 UTC m=+23.301117211) (total time: 595.319593ms):
	Trace[344079065]: [590.518346ms] [589.02611ms] Object stored in database
	I0830 21:23:53.984117       1 trace.go:116] Trace[593772942]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/default,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/tokens-controller,client:192.168.39.247 (started: 2023-08-30 21:23:53.396527894 +0000 UTC m=+23.309741598) (total time: 587.571184ms):
	Trace[593772942]: [572.896096ms] [572.591056ms] Object stored in database
	I0830 21:23:53.990176       1 trace.go:116] Trace[1435457523]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2023-08-30 21:23:53.422613706 +0000 UTC m=+23.335827435) (total time: 567.498144ms):
	Trace[1435457523]: [552.766863ms] [543.371474ms] Transaction committed
	I0830 21:23:53.998370       1 trace.go:116] Trace[1594077592]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2023-08-30 21:23:53.410061932 +0000 UTC m=+23.323275644) (total time: 588.29204ms):
	Trace[1594077592]: [566.86214ms] [565.108872ms] Transaction committed
	I0830 21:23:54.011103       1 trace.go:116] Trace[1721108675]: "Patch" url:/api/v1/nodes/ingress-addon-legacy-306023,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:node-controller,client:192.168.39.247 (started: 2023-08-30 21:23:53.422348624 +0000 UTC m=+23.335562336) (total time: 588.734569ms):
	Trace[1721108675]: [553.084332ms] [544.549616ms] About to apply patch
	I0830 21:23:54.013596       1 trace.go:116] Trace[1875009693]: "Patch" url:/api/v1/nodes/ingress-addon-legacy-306023,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:node-controller,client:192.168.39.247 (started: 2023-08-30 21:23:53.409977363 +0000 UTC m=+23.323191109) (total time: 603.589994ms):
	Trace[1875009693]: [566.990131ms] [565.450548ms] About to apply patch
	I0830 21:24:10.442440       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0830 21:24:32.851688       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [47b5dc57e93af736887993160a148f68806250fa030e8a1bd1172051b4adb94f] <==
	* I0830 21:23:53.975329       1 shared_informer.go:230] Caches are synced for resource quota 
	I0830 21:23:53.992832       1 shared_informer.go:230] Caches are synced for job 
	I0830 21:23:54.016485       1 range_allocator.go:373] Set node ingress-addon-legacy-306023 PodCIDR to [10.244.0.0/24]
	I0830 21:23:54.017182       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"955f0583-b85e-4788-b3ab-f5b38d3d9bae", APIVersion:"apps/v1", ResourceVersion:"315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-8rqch
	E0830 21:23:54.022645       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0830 21:23:54.028498       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0830 21:23:54.028628       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0830 21:23:54.028706       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0830 21:23:54.033201       1 shared_informer.go:230] Caches are synced for resource quota 
	I0830 21:23:54.068128       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"955f0583-b85e-4788-b3ab-f5b38d3d9bae", APIVersion:"apps/v1", ResourceVersion:"315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-grg77
	I0830 21:23:54.070388       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"fb98ccdd-d77a-4c73-9383-82d19ba5d7a2", APIVersion:"apps/v1", ResourceVersion:"206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-j8947
	E0830 21:23:54.090531       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0830 21:23:54.166463       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0830 21:23:54.172629       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"fb98ccdd-d77a-4c73-9383-82d19ba5d7a2", ResourceVersion:"206", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63829027418, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001959980), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc0019599a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019599c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0019867c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc0019599e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001959a00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001959a40)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000a895e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b22fd8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0002a2b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000ebe8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000b23028)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0830 21:23:54.337890       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"fbd86aee-6673-4895-bd61-c8554199e167", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0830 21:23:54.383902       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"955f0583-b85e-4788-b3ab-f5b38d3d9bae", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-grg77
	I0830 21:24:10.396475       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ab2b42ff-0277-4b7c-8fa8-5d9020677e2e", APIVersion:"apps/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0830 21:24:10.421079       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"ae96c2bb-5f06-4647-90a7-d8d5bacbec39", APIVersion:"apps/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-rhn4t
	I0830 21:24:10.492883       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"01f3ccbb-c596-4c1f-9930-15fb0e779621", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-cljv7
	I0830 21:24:10.563976       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e7cfa567-8051-448b-abb0-8bf7d259414d", APIVersion:"batch/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-j46hp
	I0830 21:24:12.840205       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"01f3ccbb-c596-4c1f-9930-15fb0e779621", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0830 21:24:13.847180       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e7cfa567-8051-448b-abb0-8bf7d259414d", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0830 21:26:53.799202       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"22de7c8d-2a1a-4952-bde8-9bc4b9c470ac", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0830 21:26:53.815670       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"13dee24b-68d4-48b4-95e0-c711b9f33dbe", APIVersion:"apps/v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-scrwn
	
	* 
	* ==> kube-proxy [a6ed1c9414dad7d78e22f2d94bea3de0a80a9071d9fa758ac89823bf6ad0076a] <==
	* W0830 21:23:55.555797       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0830 21:23:55.566059       1 node.go:136] Successfully retrieved node IP: 192.168.39.247
	I0830 21:23:55.566166       1 server_others.go:186] Using iptables Proxier.
	I0830 21:23:55.567547       1 server.go:583] Version: v1.18.20
	I0830 21:23:55.580755       1 config.go:315] Starting service config controller
	I0830 21:23:55.580891       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0830 21:23:55.581131       1 config.go:133] Starting endpoints config controller
	I0830 21:23:55.581455       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0830 21:23:55.681322       1 shared_informer.go:230] Caches are synced for service config 
	I0830 21:23:55.681827       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [1ffb0051544b6926d80244d77f7780d055695db6b1f53c2ebc0919b1e3f4df77] <==
	* W0830 21:23:34.985981       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 21:23:35.014015       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0830 21:23:35.014130       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0830 21:23:35.016816       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 21:23:35.016884       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 21:23:35.017144       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0830 21:23:35.017385       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0830 21:23:35.043570       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 21:23:35.043831       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 21:23:35.043960       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 21:23:35.044037       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 21:23:35.044112       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 21:23:35.044189       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 21:23:35.044356       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 21:23:35.044501       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 21:23:35.044608       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 21:23:35.044728       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 21:23:35.044809       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 21:23:35.044836       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 21:23:35.857211       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 21:23:35.878537       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 21:23:36.067333       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 21:23:36.075573       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 21:23:36.315175       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0830 21:23:38.217126       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 21:23:03 UTC, ends at Wed 2023-08-30 21:27:05 UTC. --
	Aug 30 21:24:14 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:24:14.928996    1444 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9448b898-17df-4119-ade4-fbd067c79fa6-ingress-nginx-admission-token-v5scb" (OuterVolumeSpecName: "ingress-nginx-admission-token-v5scb") pod "9448b898-17df-4119-ade4-fbd067c79fa6" (UID: "9448b898-17df-4119-ade4-fbd067c79fa6"). InnerVolumeSpecName "ingress-nginx-admission-token-v5scb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:24:15 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:24:15.023625    1444 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-v5scb" (UniqueName: "kubernetes.io/secret/9448b898-17df-4119-ade4-fbd067c79fa6-ingress-nginx-admission-token-v5scb") on node "ingress-addon-legacy-306023" DevicePath ""
	Aug 30 21:24:22 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:24:22.727380    1444 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Aug 30 21:24:22 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:24:22.856791    1444 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-ggxxl" (UniqueName: "kubernetes.io/secret/4784b90c-9c59-4a13-80dd-93c7386c8305-minikube-ingress-dns-token-ggxxl") pod "kube-ingress-dns-minikube" (UID: "4784b90c-9c59-4a13-80dd-93c7386c8305")
	Aug 30 21:24:33 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:24:33.032068    1444 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Aug 30 21:24:33 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:24:33.197464    1444 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-n9c65" (UniqueName: "kubernetes.io/secret/f3dacb2e-a4c3-414a-9e73-713b175fc41f-default-token-n9c65") pod "nginx" (UID: "f3dacb2e-a4c3-414a-9e73-713b175fc41f")
	Aug 30 21:26:53 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:26:53.822171    1444 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Aug 30 21:26:53 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:26:53.946222    1444 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-n9c65" (UniqueName: "kubernetes.io/secret/34aeae27-363f-4168-8e76-5949435b6f85-default-token-n9c65") pod "hello-world-app-5f5d8b66bb-scrwn" (UID: "34aeae27-363f-4168-8e76-5949435b6f85")
	Aug 30 21:26:55 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:26:55.773881    1444 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 222b9d5a8c93b609ffa12ecb245287875516290a0ca91e9b88c7fe4cc85c420e
	Aug 30 21:26:55 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:26:55.851683    1444 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-ggxxl" (UniqueName: "kubernetes.io/secret/4784b90c-9c59-4a13-80dd-93c7386c8305-minikube-ingress-dns-token-ggxxl") pod "4784b90c-9c59-4a13-80dd-93c7386c8305" (UID: "4784b90c-9c59-4a13-80dd-93c7386c8305")
	Aug 30 21:26:55 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:26:55.857083    1444 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4784b90c-9c59-4a13-80dd-93c7386c8305-minikube-ingress-dns-token-ggxxl" (OuterVolumeSpecName: "minikube-ingress-dns-token-ggxxl") pod "4784b90c-9c59-4a13-80dd-93c7386c8305" (UID: "4784b90c-9c59-4a13-80dd-93c7386c8305"). InnerVolumeSpecName "minikube-ingress-dns-token-ggxxl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:26:55 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:26:55.875866    1444 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 222b9d5a8c93b609ffa12ecb245287875516290a0ca91e9b88c7fe4cc85c420e
	Aug 30 21:26:55 ingress-addon-legacy-306023 kubelet[1444]: E0830 21:26:55.876482    1444 remote_runtime.go:295] ContainerStatus "222b9d5a8c93b609ffa12ecb245287875516290a0ca91e9b88c7fe4cc85c420e" from runtime service failed: rpc error: code = NotFound desc = could not find container "222b9d5a8c93b609ffa12ecb245287875516290a0ca91e9b88c7fe4cc85c420e": container with ID starting with 222b9d5a8c93b609ffa12ecb245287875516290a0ca91e9b88c7fe4cc85c420e not found: ID does not exist
	Aug 30 21:26:55 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:26:55.952093    1444 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-ggxxl" (UniqueName: "kubernetes.io/secret/4784b90c-9c59-4a13-80dd-93c7386c8305-minikube-ingress-dns-token-ggxxl") on node "ingress-addon-legacy-306023" DevicePath ""
	Aug 30 21:26:56 ingress-addon-legacy-306023 kubelet[1444]: E0830 21:26:56.513441    1444 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"222b9d5a8c93b609ffa12ecb245287875516290a0ca91e9b88c7fe4cc85c420e\": container with ID starting with 222b9d5a8c93b609ffa12ecb245287875516290a0ca91e9b88c7fe4cc85c420e not found: ID does not exist"
	Aug 30 21:26:57 ingress-addon-legacy-306023 kubelet[1444]: E0830 21:26:57.751738    1444 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rhn4t.1780465b47af67cf", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rhn4t", UID:"d613e907-86be-4145-995b-7df1d1299386", APIVersion:"v1", ResourceVersion:"458", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-306023"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1340ae86cb35dcf, ext:199845299747, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1340ae86cb35dcf, ext:199845299747, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rhn4t.1780465b47af67cf" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 30 21:26:57 ingress-addon-legacy-306023 kubelet[1444]: E0830 21:26:57.768819    1444 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rhn4t.1780465b47af67cf", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rhn4t", UID:"d613e907-86be-4145-995b-7df1d1299386", APIVersion:"v1", ResourceVersion:"458", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-306023"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1340ae86cb35dcf, ext:199845299747, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1340ae86d876701, ext:199859195734, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rhn4t.1780465b47af67cf" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 30 21:27:00 ingress-addon-legacy-306023 kubelet[1444]: W0830 21:27:00.795564    1444 pod_container_deletor.go:77] Container "bc16a9c6dbd2740f607a3671e033cae6553739166202e10a1a15ce8e33596fcc" not found in pod's containers
	Aug 30 21:27:01 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:27:01.872647    1444 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-j5fpj" (UniqueName: "kubernetes.io/secret/d613e907-86be-4145-995b-7df1d1299386-ingress-nginx-token-j5fpj") pod "d613e907-86be-4145-995b-7df1d1299386" (UID: "d613e907-86be-4145-995b-7df1d1299386")
	Aug 30 21:27:01 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:27:01.872713    1444 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d613e907-86be-4145-995b-7df1d1299386-webhook-cert") pod "d613e907-86be-4145-995b-7df1d1299386" (UID: "d613e907-86be-4145-995b-7df1d1299386")
	Aug 30 21:27:01 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:27:01.875365    1444 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d613e907-86be-4145-995b-7df1d1299386-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d613e907-86be-4145-995b-7df1d1299386" (UID: "d613e907-86be-4145-995b-7df1d1299386"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:27:01 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:27:01.876376    1444 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d613e907-86be-4145-995b-7df1d1299386-ingress-nginx-token-j5fpj" (OuterVolumeSpecName: "ingress-nginx-token-j5fpj") pod "d613e907-86be-4145-995b-7df1d1299386" (UID: "d613e907-86be-4145-995b-7df1d1299386"). InnerVolumeSpecName "ingress-nginx-token-j5fpj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:27:01 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:27:01.973145    1444 reconciler.go:319] Volume detached for volume "ingress-nginx-token-j5fpj" (UniqueName: "kubernetes.io/secret/d613e907-86be-4145-995b-7df1d1299386-ingress-nginx-token-j5fpj") on node "ingress-addon-legacy-306023" DevicePath ""
	Aug 30 21:27:01 ingress-addon-legacy-306023 kubelet[1444]: I0830 21:27:01.973183    1444 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d613e907-86be-4145-995b-7df1d1299386-webhook-cert") on node "ingress-addon-legacy-306023" DevicePath ""
	Aug 30 21:27:02 ingress-addon-legacy-306023 kubelet[1444]: W0830 21:27:02.522988    1444 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/d613e907-86be-4145-995b-7df1d1299386/volumes" does not exist
	
	* 
	* ==> storage-provisioner [ada0722e293eb2414267404ea9ec0e91f748d2f2a41d42b9f7b7d33a573c42a1] <==
	* I0830 21:23:56.158883       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 21:23:56.175070       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 21:23:56.175132       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 21:23:56.205251       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 21:23:56.206464       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-306023_8df414b3-59d4-47d1-9cc1-7322ec3a7370!
	I0830 21:23:56.255668       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2781bf9c-b4f0-4c7e-8b7c-41c3db18907a", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-306023_8df414b3-59d4-47d1-9cc1-7322ec3a7370 became leader
	I0830 21:23:56.423920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-306023_8df414b3-59d4-47d1-9cc1-7322ec3a7370!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-306023 -n ingress-addon-legacy-306023
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-306023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (163.34s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-j4rx4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-j4rx4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-j4rx4 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (191.921843ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-j4rx4): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-mzmpx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-mzmpx -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-mzmpx -- sh -c "ping -c 1 192.168.39.1": exit status 1 (184.436584ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-mzmpx): exit status 1
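Note on the two failures above: both exec probes print the PING header and then fail with "ping: permission denied (are you root?)". busybox's ping reports this when it cannot open a raw ICMP socket, which typically means the process is not root and the container was not granted CAP_NET_RAW (some runtimes drop it from the default capability set), so the failure happens before any packet reaches 192.168.39.1. A hedged sketch of a pod spec that grants the capability is below, built with the k8s.io/api types; the pod name and image tag are placeholders and this is not the manifest the test suite uses.

	// Illustrative only: a busybox pod whose container is granted CAP_NET_RAW so
	// that "ping" can open an ICMP raw socket without running as root. Names and
	// image tag are placeholders; this is not minikube's own test manifest.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{Name: "ping-capable"},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "busybox:1.36",
					Command: []string{"sleep", "3600"},
					SecurityContext: &corev1.SecurityContext{
						Capabilities: &corev1.Capabilities{
							Add: []corev1.Capability{"NET_RAW"}, // lets ping create raw ICMP sockets
						},
					},
				}},
			},
		}
		out, _ := yaml.Marshal(pod) // emit a manifest that kubectl could apply
		fmt.Println(string(out))
	}

Applying the emitted manifest and re-running the exec'd ping would distinguish a missing-capability problem from an actual routing problem between the pod network and the 192.168.39.1 host address.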
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-752665 -n multinode-752665
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-752665 logs -n 25: (1.237232732s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-579889 ssh -- ls                    | mount-start-2-579889 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:31 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-579889 ssh --                       | mount-start-2-579889 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:31 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-579889                           | mount-start-2-579889 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:31 UTC |
	| start   | -p mount-start-2-579889                           | mount-start-2-579889 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:31 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-579889 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC |                     |
	|         | --profile mount-start-2-579889                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-579889 ssh -- ls                    | mount-start-2-579889 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:31 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-579889 ssh --                       | mount-start-2-579889 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:31 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-579889                           | mount-start-2-579889 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:31 UTC |
	| delete  | -p mount-start-1-549945                           | mount-start-1-549945 | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:31 UTC |
	| start   | -p multinode-752665                               | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:31 UTC | 30 Aug 23 21:33 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- apply -f                   | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- rollout                    | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- get pods -o                | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- get pods -o                | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | busybox-5bc68d56bd-j4rx4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | busybox-5bc68d56bd-mzmpx --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | busybox-5bc68d56bd-j4rx4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | busybox-5bc68d56bd-mzmpx --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | busybox-5bc68d56bd-j4rx4 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | busybox-5bc68d56bd-mzmpx -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- get pods -o                | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | busybox-5bc68d56bd-j4rx4                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC |                     |
	|         | busybox-5bc68d56bd-j4rx4 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC | 30 Aug 23 21:33 UTC |
	|         | busybox-5bc68d56bd-mzmpx                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-752665 -- exec                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:33 UTC |                     |
	|         | busybox-5bc68d56bd-mzmpx -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:31:47
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:31:47.564557  975141 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:31:47.564735  975141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:31:47.564745  975141 out.go:309] Setting ErrFile to fd 2...
	I0830 21:31:47.564752  975141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:31:47.564999  975141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 21:31:47.565607  975141 out.go:303] Setting JSON to false
	I0830 21:31:47.566573  975141 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11655,"bootTime":1693419453,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:31:47.566646  975141 start.go:138] virtualization: kvm guest
	I0830 21:31:47.569191  975141 out.go:177] * [multinode-752665] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 21:31:47.570577  975141 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 21:31:47.572006  975141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:31:47.570616  975141 notify.go:220] Checking for updates...
	I0830 21:31:47.574819  975141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:31:47.576350  975141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:31:47.577722  975141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 21:31:47.579098  975141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:31:47.580755  975141 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:31:47.615725  975141 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 21:31:47.617107  975141 start.go:298] selected driver: kvm2
	I0830 21:31:47.617123  975141 start.go:902] validating driver "kvm2" against <nil>
	I0830 21:31:47.617144  975141 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:31:47.618102  975141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:31:47.618222  975141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 21:31:47.633156  975141 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 21:31:47.633208  975141 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 21:31:47.633401  975141 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 21:31:47.633443  975141 cni.go:84] Creating CNI manager for ""
	I0830 21:31:47.633450  975141 cni.go:136] 0 nodes found, recommending kindnet
	I0830 21:31:47.633458  975141 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0830 21:31:47.633467  975141 start_flags.go:319] config:
	{Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:31:47.633608  975141 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:31:47.635593  975141 out.go:177] * Starting control plane node multinode-752665 in cluster multinode-752665
	I0830 21:31:47.637190  975141 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:31:47.637226  975141 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 21:31:47.637240  975141 cache.go:57] Caching tarball of preloaded images
	I0830 21:31:47.637321  975141 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 21:31:47.637336  975141 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 21:31:47.637708  975141 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:31:47.637734  975141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json: {Name:mkab3c1ab7883d0df4ea51f402009869da38981f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:31:47.637947  975141 start.go:365] acquiring machines lock for multinode-752665: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:31:47.637987  975141 start.go:369] acquired machines lock for "multinode-752665" in 20.798µs
	I0830 21:31:47.638011  975141 start.go:93] Provisioning new machine with config: &{Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:31:47.638106  975141 start.go:125] createHost starting for "" (driver="kvm2")
	I0830 21:31:47.639893  975141 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0830 21:31:47.640051  975141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:31:47.640102  975141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:31:47.654181  975141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0830 21:31:47.654622  975141 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:31:47.655265  975141 main.go:141] libmachine: Using API Version  1
	I0830 21:31:47.655293  975141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:31:47.655622  975141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:31:47.655793  975141 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:31:47.655929  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:31:47.656093  975141 start.go:159] libmachine.API.Create for "multinode-752665" (driver="kvm2")
	I0830 21:31:47.656131  975141 client.go:168] LocalClient.Create starting
	I0830 21:31:47.656176  975141 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem
	I0830 21:31:47.656223  975141 main.go:141] libmachine: Decoding PEM data...
	I0830 21:31:47.656249  975141 main.go:141] libmachine: Parsing certificate...
	I0830 21:31:47.656324  975141 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem
	I0830 21:31:47.656350  975141 main.go:141] libmachine: Decoding PEM data...
	I0830 21:31:47.656367  975141 main.go:141] libmachine: Parsing certificate...
	I0830 21:31:47.656388  975141 main.go:141] libmachine: Running pre-create checks...
	I0830 21:31:47.656408  975141 main.go:141] libmachine: (multinode-752665) Calling .PreCreateCheck
	I0830 21:31:47.656782  975141 main.go:141] libmachine: (multinode-752665) Calling .GetConfigRaw
	I0830 21:31:47.657214  975141 main.go:141] libmachine: Creating machine...
	I0830 21:31:47.657234  975141 main.go:141] libmachine: (multinode-752665) Calling .Create
	I0830 21:31:47.657354  975141 main.go:141] libmachine: (multinode-752665) Creating KVM machine...
	I0830 21:31:47.658532  975141 main.go:141] libmachine: (multinode-752665) DBG | found existing default KVM network
	I0830 21:31:47.659309  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:47.659131  975164 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d790}
	I0830 21:31:47.664521  975141 main.go:141] libmachine: (multinode-752665) DBG | trying to create private KVM network mk-multinode-752665 192.168.39.0/24...
	I0830 21:31:47.735562  975141 main.go:141] libmachine: (multinode-752665) DBG | private KVM network mk-multinode-752665 192.168.39.0/24 created
	I0830 21:31:47.735596  975141 main.go:141] libmachine: (multinode-752665) Setting up store path in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665 ...
	I0830 21:31:47.735613  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:47.735502  975164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:31:47.735627  975141 main.go:141] libmachine: (multinode-752665) Building disk image from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 21:31:47.735674  975141 main.go:141] libmachine: (multinode-752665) Downloading /home/jenkins/minikube-integration/17114-955377/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 21:31:47.957942  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:47.957796  975164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa...
	I0830 21:31:48.133325  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:48.133178  975164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/multinode-752665.rawdisk...
	I0830 21:31:48.133374  975141 main.go:141] libmachine: (multinode-752665) DBG | Writing magic tar header
	I0830 21:31:48.133392  975141 main.go:141] libmachine: (multinode-752665) DBG | Writing SSH key tar header
	I0830 21:31:48.133406  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:48.133335  975164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665 ...
	I0830 21:31:48.133508  975141 main.go:141] libmachine: (multinode-752665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665
	I0830 21:31:48.133541  975141 main.go:141] libmachine: (multinode-752665) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665 (perms=drwx------)
	I0830 21:31:48.133549  975141 main.go:141] libmachine: (multinode-752665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines
	I0830 21:31:48.133560  975141 main.go:141] libmachine: (multinode-752665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:31:48.133571  975141 main.go:141] libmachine: (multinode-752665) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines (perms=drwxr-xr-x)
	I0830 21:31:48.133577  975141 main.go:141] libmachine: (multinode-752665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377
	I0830 21:31:48.133592  975141 main.go:141] libmachine: (multinode-752665) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube (perms=drwxr-xr-x)
	I0830 21:31:48.133601  975141 main.go:141] libmachine: (multinode-752665) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 21:31:48.133610  975141 main.go:141] libmachine: (multinode-752665) DBG | Checking permissions on dir: /home/jenkins
	I0830 21:31:48.133616  975141 main.go:141] libmachine: (multinode-752665) DBG | Checking permissions on dir: /home
	I0830 21:31:48.133625  975141 main.go:141] libmachine: (multinode-752665) DBG | Skipping /home - not owner
	I0830 21:31:48.133632  975141 main.go:141] libmachine: (multinode-752665) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377 (perms=drwxrwxr-x)
	I0830 21:31:48.133648  975141 main.go:141] libmachine: (multinode-752665) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 21:31:48.133656  975141 main.go:141] libmachine: (multinode-752665) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 21:31:48.133663  975141 main.go:141] libmachine: (multinode-752665) Creating domain...
	I0830 21:31:48.134784  975141 main.go:141] libmachine: (multinode-752665) define libvirt domain using xml: 
	I0830 21:31:48.134814  975141 main.go:141] libmachine: (multinode-752665) <domain type='kvm'>
	I0830 21:31:48.134826  975141 main.go:141] libmachine: (multinode-752665)   <name>multinode-752665</name>
	I0830 21:31:48.134843  975141 main.go:141] libmachine: (multinode-752665)   <memory unit='MiB'>2200</memory>
	I0830 21:31:48.134851  975141 main.go:141] libmachine: (multinode-752665)   <vcpu>2</vcpu>
	I0830 21:31:48.134862  975141 main.go:141] libmachine: (multinode-752665)   <features>
	I0830 21:31:48.134868  975141 main.go:141] libmachine: (multinode-752665)     <acpi/>
	I0830 21:31:48.134880  975141 main.go:141] libmachine: (multinode-752665)     <apic/>
	I0830 21:31:48.134961  975141 main.go:141] libmachine: (multinode-752665)     <pae/>
	I0830 21:31:48.135015  975141 main.go:141] libmachine: (multinode-752665)     
	I0830 21:31:48.135032  975141 main.go:141] libmachine: (multinode-752665)   </features>
	I0830 21:31:48.135046  975141 main.go:141] libmachine: (multinode-752665)   <cpu mode='host-passthrough'>
	I0830 21:31:48.135059  975141 main.go:141] libmachine: (multinode-752665)   
	I0830 21:31:48.135071  975141 main.go:141] libmachine: (multinode-752665)   </cpu>
	I0830 21:31:48.135083  975141 main.go:141] libmachine: (multinode-752665)   <os>
	I0830 21:31:48.135102  975141 main.go:141] libmachine: (multinode-752665)     <type>hvm</type>
	I0830 21:31:48.135114  975141 main.go:141] libmachine: (multinode-752665)     <boot dev='cdrom'/>
	I0830 21:31:48.135125  975141 main.go:141] libmachine: (multinode-752665)     <boot dev='hd'/>
	I0830 21:31:48.135138  975141 main.go:141] libmachine: (multinode-752665)     <bootmenu enable='no'/>
	I0830 21:31:48.135150  975141 main.go:141] libmachine: (multinode-752665)   </os>
	I0830 21:31:48.135158  975141 main.go:141] libmachine: (multinode-752665)   <devices>
	I0830 21:31:48.135172  975141 main.go:141] libmachine: (multinode-752665)     <disk type='file' device='cdrom'>
	I0830 21:31:48.135297  975141 main.go:141] libmachine: (multinode-752665)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/boot2docker.iso'/>
	I0830 21:31:48.135338  975141 main.go:141] libmachine: (multinode-752665)       <target dev='hdc' bus='scsi'/>
	I0830 21:31:48.135354  975141 main.go:141] libmachine: (multinode-752665)       <readonly/>
	I0830 21:31:48.135368  975141 main.go:141] libmachine: (multinode-752665)     </disk>
	I0830 21:31:48.135385  975141 main.go:141] libmachine: (multinode-752665)     <disk type='file' device='disk'>
	I0830 21:31:48.135407  975141 main.go:141] libmachine: (multinode-752665)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 21:31:48.135427  975141 main.go:141] libmachine: (multinode-752665)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/multinode-752665.rawdisk'/>
	I0830 21:31:48.135442  975141 main.go:141] libmachine: (multinode-752665)       <target dev='hda' bus='virtio'/>
	I0830 21:31:48.135462  975141 main.go:141] libmachine: (multinode-752665)     </disk>
	I0830 21:31:48.135476  975141 main.go:141] libmachine: (multinode-752665)     <interface type='network'>
	I0830 21:31:48.135494  975141 main.go:141] libmachine: (multinode-752665)       <source network='mk-multinode-752665'/>
	I0830 21:31:48.135509  975141 main.go:141] libmachine: (multinode-752665)       <model type='virtio'/>
	I0830 21:31:48.135521  975141 main.go:141] libmachine: (multinode-752665)     </interface>
	I0830 21:31:48.135535  975141 main.go:141] libmachine: (multinode-752665)     <interface type='network'>
	I0830 21:31:48.135549  975141 main.go:141] libmachine: (multinode-752665)       <source network='default'/>
	I0830 21:31:48.135562  975141 main.go:141] libmachine: (multinode-752665)       <model type='virtio'/>
	I0830 21:31:48.135576  975141 main.go:141] libmachine: (multinode-752665)     </interface>
	I0830 21:31:48.135588  975141 main.go:141] libmachine: (multinode-752665)     <serial type='pty'>
	I0830 21:31:48.135602  975141 main.go:141] libmachine: (multinode-752665)       <target port='0'/>
	I0830 21:31:48.135614  975141 main.go:141] libmachine: (multinode-752665)     </serial>
	I0830 21:31:48.135628  975141 main.go:141] libmachine: (multinode-752665)     <console type='pty'>
	I0830 21:31:48.135646  975141 main.go:141] libmachine: (multinode-752665)       <target type='serial' port='0'/>
	I0830 21:31:48.135665  975141 main.go:141] libmachine: (multinode-752665)     </console>
	I0830 21:31:48.135678  975141 main.go:141] libmachine: (multinode-752665)     <rng model='virtio'>
	I0830 21:31:48.135694  975141 main.go:141] libmachine: (multinode-752665)       <backend model='random'>/dev/random</backend>
	I0830 21:31:48.135706  975141 main.go:141] libmachine: (multinode-752665)     </rng>
	I0830 21:31:48.135729  975141 main.go:141] libmachine: (multinode-752665)     
	I0830 21:31:48.135754  975141 main.go:141] libmachine: (multinode-752665)     
	I0830 21:31:48.135784  975141 main.go:141] libmachine: (multinode-752665)   </devices>
	I0830 21:31:48.135794  975141 main.go:141] libmachine: (multinode-752665) </domain>
	I0830 21:31:48.135822  975141 main.go:141] libmachine: (multinode-752665) 
	I0830 21:31:48.140157  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:45:9b:bd in network default
	I0830 21:31:48.140730  975141 main.go:141] libmachine: (multinode-752665) Ensuring networks are active...
	I0830 21:31:48.140751  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:48.141312  975141 main.go:141] libmachine: (multinode-752665) Ensuring network default is active
	I0830 21:31:48.141641  975141 main.go:141] libmachine: (multinode-752665) Ensuring network mk-multinode-752665 is active
	I0830 21:31:48.142077  975141 main.go:141] libmachine: (multinode-752665) Getting domain xml...
	I0830 21:31:48.142676  975141 main.go:141] libmachine: (multinode-752665) Creating domain...
	I0830 21:31:49.353153  975141 main.go:141] libmachine: (multinode-752665) Waiting to get IP...
	I0830 21:31:49.353988  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:49.354344  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:49.354392  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:49.354336  975164 retry.go:31] will retry after 297.59969ms: waiting for machine to come up
	I0830 21:31:49.653810  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:49.654246  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:49.654276  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:49.654168  975164 retry.go:31] will retry after 239.78264ms: waiting for machine to come up
	I0830 21:31:49.895616  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:49.895974  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:49.896002  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:49.895926  975164 retry.go:31] will retry after 302.263882ms: waiting for machine to come up
	I0830 21:31:50.199468  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:50.199918  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:50.199951  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:50.199862  975164 retry.go:31] will retry after 398.740787ms: waiting for machine to come up
	I0830 21:31:50.600512  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:50.600957  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:50.600985  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:50.600923  975164 retry.go:31] will retry after 716.687724ms: waiting for machine to come up
	I0830 21:31:51.318742  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:51.319025  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:51.319056  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:51.318981  975164 retry.go:31] will retry after 825.449858ms: waiting for machine to come up
	I0830 21:31:52.146074  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:52.146431  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:52.146456  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:52.146378  975164 retry.go:31] will retry after 883.644787ms: waiting for machine to come up
	I0830 21:31:53.031956  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:53.032373  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:53.032398  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:53.032333  975164 retry.go:31] will retry after 1.262520271s: waiting for machine to come up
	I0830 21:31:54.296772  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:54.297145  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:54.297178  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:54.297088  975164 retry.go:31] will retry after 1.740885834s: waiting for machine to come up
	I0830 21:31:56.040104  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:56.040460  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:56.040487  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:56.040430  975164 retry.go:31] will retry after 2.150942293s: waiting for machine to come up
	I0830 21:31:58.192630  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:58.192994  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:58.193027  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:58.192943  975164 retry.go:31] will retry after 1.773502327s: waiting for machine to come up
	I0830 21:31:59.969082  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:31:59.969487  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:31:59.969517  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:31:59.969447  975164 retry.go:31] will retry after 3.041975682s: waiting for machine to come up
	I0830 21:32:03.013186  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:03.013667  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:32:03.013694  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:32:03.013603  975164 retry.go:31] will retry after 4.177183553s: waiting for machine to come up
	I0830 21:32:07.195880  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:07.196301  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:32:07.196327  975141 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:32:07.196265  975164 retry.go:31] will retry after 5.654056461s: waiting for machine to come up
	I0830 21:32:12.852438  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:12.852860  975141 main.go:141] libmachine: (multinode-752665) Found IP for machine: 192.168.39.20
	I0830 21:32:12.852892  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has current primary IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:12.852904  975141 main.go:141] libmachine: (multinode-752665) Reserving static IP address...
	I0830 21:32:12.853224  975141 main.go:141] libmachine: (multinode-752665) DBG | unable to find host DHCP lease matching {name: "multinode-752665", mac: "52:54:00:73:23:77", ip: "192.168.39.20"} in network mk-multinode-752665
	I0830 21:32:12.923654  975141 main.go:141] libmachine: (multinode-752665) DBG | Getting to WaitForSSH function...
	I0830 21:32:12.923689  975141 main.go:141] libmachine: (multinode-752665) Reserved static IP address: 192.168.39.20
	I0830 21:32:12.923703  975141 main.go:141] libmachine: (multinode-752665) Waiting for SSH to be available...
	I0830 21:32:12.926303  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:12.926682  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:minikube Clientid:01:52:54:00:73:23:77}
	I0830 21:32:12.926716  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:12.926852  975141 main.go:141] libmachine: (multinode-752665) DBG | Using SSH client type: external
	I0830 21:32:12.926892  975141 main.go:141] libmachine: (multinode-752665) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa (-rw-------)
	I0830 21:32:12.926934  975141 main.go:141] libmachine: (multinode-752665) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.20 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 21:32:12.926958  975141 main.go:141] libmachine: (multinode-752665) DBG | About to run SSH command:
	I0830 21:32:12.926970  975141 main.go:141] libmachine: (multinode-752665) DBG | exit 0
	I0830 21:32:13.015685  975141 main.go:141] libmachine: (multinode-752665) DBG | SSH cmd err, output: <nil>: 
	I0830 21:32:13.016004  975141 main.go:141] libmachine: (multinode-752665) KVM machine creation complete!
	I0830 21:32:13.016303  975141 main.go:141] libmachine: (multinode-752665) Calling .GetConfigRaw
	I0830 21:32:13.016911  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:13.017109  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:13.017312  975141 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 21:32:13.017329  975141 main.go:141] libmachine: (multinode-752665) Calling .GetState
	I0830 21:32:13.018464  975141 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 21:32:13.018481  975141 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 21:32:13.018490  975141 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 21:32:13.018500  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:13.020883  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.021261  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:13.021304  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.021380  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:13.021561  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.021721  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.021873  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:13.022053  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:32:13.022505  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:32:13.022520  975141 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 21:32:13.135069  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:32:13.135091  975141 main.go:141] libmachine: Detecting the provisioner...
	I0830 21:32:13.135101  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:13.137845  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.138180  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:13.138208  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.138360  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:13.138565  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.138720  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.138866  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:13.139033  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:32:13.139450  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:32:13.139462  975141 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 21:32:13.252956  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 21:32:13.253020  975141 main.go:141] libmachine: found compatible host: buildroot
	I0830 21:32:13.253034  975141 main.go:141] libmachine: Provisioning with buildroot...
	I0830 21:32:13.253046  975141 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:32:13.253289  975141 buildroot.go:166] provisioning hostname "multinode-752665"
	I0830 21:32:13.253315  975141 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:32:13.253453  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:13.256385  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.256761  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:13.256795  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.256951  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:13.257133  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.257298  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.257469  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:13.257629  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:32:13.258011  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:32:13.258031  975141 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-752665 && echo "multinode-752665" | sudo tee /etc/hostname
	I0830 21:32:13.386627  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-752665
	
	I0830 21:32:13.386663  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:13.389201  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.389539  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:13.389596  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.389736  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:13.389933  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.390067  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.390191  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:13.390348  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:32:13.390738  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:32:13.390754  975141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-752665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-752665/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-752665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:32:13.514103  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:32:13.514138  975141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 21:32:13.514165  975141 buildroot.go:174] setting up certificates
	I0830 21:32:13.514177  975141 provision.go:83] configureAuth start
	I0830 21:32:13.514187  975141 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:32:13.514506  975141 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:32:13.517099  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.517402  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:13.517433  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.517550  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:13.519415  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.519694  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:13.519717  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.519855  975141 provision.go:138] copyHostCerts
	I0830 21:32:13.519906  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:32:13.519962  975141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 21:32:13.519980  975141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:32:13.520029  975141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 21:32:13.520132  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:32:13.520157  975141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 21:32:13.520163  975141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:32:13.520192  975141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 21:32:13.520267  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:32:13.520293  975141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 21:32:13.520299  975141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:32:13.520330  975141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 21:32:13.520394  975141 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.multinode-752665 san=[192.168.39.20 192.168.39.20 localhost 127.0.0.1 minikube multinode-752665]
	I0830 21:32:13.724076  975141 provision.go:172] copyRemoteCerts
	I0830 21:32:13.724134  975141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:32:13.724162  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:13.726466  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.726755  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:13.726787  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.726986  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:13.727184  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.727377  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:13.727482  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:32:13.813298  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 21:32:13.813366  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 21:32:13.840107  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 21:32:13.840190  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0830 21:32:13.865990  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 21:32:13.866062  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 21:32:13.891886  975141 provision.go:86] duration metric: configureAuth took 377.697969ms
	I0830 21:32:13.891916  975141 buildroot.go:189] setting minikube options for container-runtime
	I0830 21:32:13.892154  975141 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:32:13.892277  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:13.894586  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.894957  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:13.894992  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:13.895136  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:13.895276  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.895445  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:13.895619  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:13.895792  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:32:13.896199  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:32:13.896217  975141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:32:14.232199  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
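	The `%!s(MISSING)` in the logged command is minikube's logger dropping a printf argument; judging from the echoed result above, the command actually run is roughly this sketch:

	# Write the CRI-O options drop-in (insecure registry for the service CIDR)
	# and restart the runtime so it takes effect.
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio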
	
	I0830 21:32:14.232233  975141 main.go:141] libmachine: Checking connection to Docker...
	I0830 21:32:14.232246  975141 main.go:141] libmachine: (multinode-752665) Calling .GetURL
	I0830 21:32:14.233513  975141 main.go:141] libmachine: (multinode-752665) DBG | Using libvirt version 6000000
	I0830 21:32:14.235938  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.236202  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:14.236224  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.236376  975141 main.go:141] libmachine: Docker is up and running!
	I0830 21:32:14.236398  975141 main.go:141] libmachine: Reticulating splines...
	I0830 21:32:14.236408  975141 client.go:171] LocalClient.Create took 26.580263197s
	I0830 21:32:14.236451  975141 start.go:167] duration metric: libmachine.API.Create for "multinode-752665" took 26.580347745s
	I0830 21:32:14.236467  975141 start.go:300] post-start starting for "multinode-752665" (driver="kvm2")
	I0830 21:32:14.236481  975141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:32:14.236506  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:14.236748  975141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:32:14.236773  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:14.238601  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.238851  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:14.238882  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.239029  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:14.239225  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:14.239405  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:14.239550  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:32:14.326160  975141 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:32:14.330185  975141 command_runner.go:130] > NAME=Buildroot
	I0830 21:32:14.330202  975141 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0830 21:32:14.330208  975141 command_runner.go:130] > ID=buildroot
	I0830 21:32:14.330217  975141 command_runner.go:130] > VERSION_ID=2021.02.12
	I0830 21:32:14.330223  975141 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0830 21:32:14.330402  975141 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 21:32:14.330428  975141 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 21:32:14.330499  975141 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 21:32:14.330616  975141 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 21:32:14.330626  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /etc/ssl/certs/9626212.pem
	I0830 21:32:14.330743  975141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 21:32:14.340069  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:32:14.362036  975141 start.go:303] post-start completed in 125.553909ms
	I0830 21:32:14.362085  975141 main.go:141] libmachine: (multinode-752665) Calling .GetConfigRaw
	I0830 21:32:14.362707  975141 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:32:14.365010  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.365405  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:14.365443  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.365748  975141 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:32:14.365905  975141 start.go:128] duration metric: createHost completed in 26.72778993s
	I0830 21:32:14.365928  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:14.367951  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.368217  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:14.368242  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.368361  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:14.368547  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:14.368689  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:14.368841  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:14.369000  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:32:14.369385  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:32:14.369395  975141 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 21:32:14.484333  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693431134.469673003
	
	I0830 21:32:14.484358  975141 fix.go:206] guest clock: 1693431134.469673003
	I0830 21:32:14.484365  975141 fix.go:219] Guest: 2023-08-30 21:32:14.469673003 +0000 UTC Remote: 2023-08-30 21:32:14.365917298 +0000 UTC m=+26.849998969 (delta=103.755705ms)
	I0830 21:32:14.484396  975141 fix.go:190] guest clock delta is within tolerance: 103.755705ms
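	The mangled `date +%!s(MISSING).%!N(MISSING)` is, by the same logging quirk, almost certainly `date +%s.%N`; the clock check then compares guest and host wall clocks: 1693431134.469673003 − 1693431134.365917298 ≈ 0.1038 s, which is the 103.755705ms delta reported as within tolerance.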
	I0830 21:32:14.484401  975141 start.go:83] releasing machines lock for "multinode-752665", held for 26.846403302s
	I0830 21:32:14.484420  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:14.484693  975141 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:32:14.487154  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.487468  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:14.487500  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.487626  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:14.488155  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:14.488332  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:14.488424  975141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:32:14.488475  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:14.488589  975141 ssh_runner.go:195] Run: cat /version.json
	I0830 21:32:14.488621  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:14.490980  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.491225  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.491333  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:14.491371  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.491444  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:14.491558  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:14.491599  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:14.491636  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:14.491725  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:14.491826  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:14.491910  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:14.491970  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:32:14.492001  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:14.492100  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:32:14.596673  975141 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 21:32:14.597520  975141 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
	I0830 21:32:14.597664  975141 ssh_runner.go:195] Run: systemctl --version
	I0830 21:32:14.602799  975141 command_runner.go:130] > systemd 247 (247)
	I0830 21:32:14.602821  975141 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0830 21:32:14.603141  975141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:32:14.764280  975141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 21:32:14.770005  975141 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0830 21:32:14.770053  975141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 21:32:14.770131  975141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:32:14.784581  975141 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0830 21:32:14.784673  975141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 21:32:14.784688  975141 start.go:466] detecting cgroup driver to use...
	I0830 21:32:14.784757  975141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:32:14.797362  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:32:14.808526  975141 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:32:14.808589  975141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:32:14.820050  975141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:32:14.831944  975141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:32:14.844739  975141 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0830 21:32:14.931807  975141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:32:15.051520  975141 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0830 21:32:15.051558  975141 docker.go:212] disabling docker service ...
	I0830 21:32:15.051625  975141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:32:15.064349  975141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:32:15.075460  975141 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0830 21:32:15.075571  975141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:32:15.189767  975141 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0830 21:32:15.189842  975141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:32:15.299628  975141 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0830 21:32:15.299685  975141 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0830 21:32:15.299749  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
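	Disabling the competing runtimes follows the same systemd pattern for cri-docker and then docker, as the logged commands show: stop the socket and service, disable the socket, and mask the service so nothing re-activates it.

	# cri-docker
	sudo systemctl stop -f cri-docker.socket
	sudo systemctl stop -f cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	# docker (some of these report the unit as not loaded, which is harmless)
	sudo systemctl stop -f docker.socket
	sudo systemctl stop -f docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service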
	I0830 21:32:15.311607  975141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:32:15.328583  975141 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0830 21:32:15.328631  975141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 21:32:15.328699  975141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:32:15.337596  975141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:32:15.337661  975141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:32:15.346859  975141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:32:15.355862  975141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:32:15.364752  975141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:32:15.373791  975141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:32:15.381584  975141 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:32:15.381628  975141 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:32:15.381688  975141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 21:32:15.393793  975141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:32:15.401708  975141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:32:15.509398  975141 ssh_runner.go:195] Run: sudo systemctl restart crio
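	Taken together, the runtime setup steps logged above amount to roughly this sequence (a sketch assembled from the logged commands; `%!s(MISSING)` is again a dropped printf argument):

	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

	# Pin the pause image and switch CRI-O to the cgroupfs cgroup driver.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

	# Clear stale CNI state, load br_netfilter (the sysctl probe above failed
	# because the module was not loaded yet), enable IPv4 forwarding, then
	# restart the runtime.
	sudo rm -rf /etc/cni/net.mk
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio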
	I0830 21:32:15.680108  975141 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:32:15.680216  975141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:32:15.685324  975141 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0830 21:32:15.685350  975141 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 21:32:15.685360  975141 command_runner.go:130] > Device: 16h/22d	Inode: 723         Links: 1
	I0830 21:32:15.685370  975141 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:32:15.685378  975141 command_runner.go:130] > Access: 2023-08-30 21:32:15.652460565 +0000
	I0830 21:32:15.685388  975141 command_runner.go:130] > Modify: 2023-08-30 21:32:15.652460565 +0000
	I0830 21:32:15.685400  975141 command_runner.go:130] > Change: 2023-08-30 21:32:15.652460565 +0000
	I0830 21:32:15.685408  975141 command_runner.go:130] >  Birth: -
	I0830 21:32:15.685720  975141 start.go:534] Will wait 60s for crictl version
	I0830 21:32:15.685786  975141 ssh_runner.go:195] Run: which crictl
	I0830 21:32:15.689226  975141 command_runner.go:130] > /usr/bin/crictl
	I0830 21:32:15.689474  975141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:32:15.717955  975141 command_runner.go:130] > Version:  0.1.0
	I0830 21:32:15.717978  975141 command_runner.go:130] > RuntimeName:  cri-o
	I0830 21:32:15.717983  975141 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0830 21:32:15.717990  975141 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0830 21:32:15.719127  975141 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 21:32:15.719202  975141 ssh_runner.go:195] Run: crio --version
	I0830 21:32:15.762346  975141 command_runner.go:130] > crio version 1.24.1
	I0830 21:32:15.762373  975141 command_runner.go:130] > Version:          1.24.1
	I0830 21:32:15.762384  975141 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:32:15.762392  975141 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:32:15.762416  975141 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:32:15.762435  975141 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:32:15.762446  975141 command_runner.go:130] > Compiler:         gc
	I0830 21:32:15.762453  975141 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:32:15.762467  975141 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:32:15.762480  975141 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:32:15.762490  975141 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:32:15.762496  975141 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:32:15.762593  975141 ssh_runner.go:195] Run: crio --version
	I0830 21:32:15.801974  975141 command_runner.go:130] > crio version 1.24.1
	I0830 21:32:15.802004  975141 command_runner.go:130] > Version:          1.24.1
	I0830 21:32:15.802013  975141 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:32:15.802017  975141 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:32:15.802023  975141 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:32:15.802045  975141 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:32:15.802054  975141 command_runner.go:130] > Compiler:         gc
	I0830 21:32:15.802062  975141 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:32:15.802076  975141 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:32:15.802092  975141 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:32:15.802101  975141 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:32:15.802105  975141 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:32:15.805910  975141 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 21:32:15.807400  975141 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:32:15.810208  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:15.810555  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:15.810587  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:15.810817  975141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 21:32:15.814925  975141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:32:15.826852  975141 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:32:15.826926  975141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:32:15.856876  975141 command_runner.go:130] > {
	I0830 21:32:15.856901  975141 command_runner.go:130] >   "images": [
	I0830 21:32:15.856905  975141 command_runner.go:130] >   ]
	I0830 21:32:15.856909  975141 command_runner.go:130] > }
	I0830 21:32:15.858260  975141 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 21:32:15.858349  975141 ssh_runner.go:195] Run: which lz4
	I0830 21:32:15.862525  975141 command_runner.go:130] > /usr/bin/lz4
	I0830 21:32:15.862731  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0830 21:32:15.862831  975141 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 21:32:15.867274  975141 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 21:32:15.867322  975141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 21:32:15.867353  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 21:32:17.661098  975141 crio.go:444] Took 1.798302 seconds to copy over tarball
	I0830 21:32:17.661173  975141 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 21:32:20.571411  975141 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.910209305s)
	I0830 21:32:20.571440  975141 crio.go:451] Took 2.910316 seconds to extract the tarball
	I0830 21:32:20.571450  975141 ssh_runner.go:146] rm: /preloaded.tar.lz4
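	Since no preloaded images were found in the runtime, the preload path is: copy the lz4 tarball from the host cache to the guest, unpack it into /var so CRI-O's image store is pre-populated, then remove the tarball. The guest-side steps the log records are essentially:

	# Unpack the preloaded image tarball into /var (populates
	# /var/lib/containers/storage for CRI-O), clean up, then re-list images.
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json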
	I0830 21:32:20.612512  975141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:32:20.666792  975141 command_runner.go:130] > {
	I0830 21:32:20.666817  975141 command_runner.go:130] >   "images": [
	I0830 21:32:20.666821  975141 command_runner.go:130] >     {
	I0830 21:32:20.666834  975141 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0830 21:32:20.666839  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.666845  975141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0830 21:32:20.666849  975141 command_runner.go:130] >       ],
	I0830 21:32:20.666853  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.666861  975141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0830 21:32:20.666867  975141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0830 21:32:20.666870  975141 command_runner.go:130] >       ],
	I0830 21:32:20.666878  975141 command_runner.go:130] >       "size": "65249302",
	I0830 21:32:20.666887  975141 command_runner.go:130] >       "uid": null,
	I0830 21:32:20.666893  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.666900  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.666909  975141 command_runner.go:130] >     },
	I0830 21:32:20.666915  975141 command_runner.go:130] >     {
	I0830 21:32:20.666930  975141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0830 21:32:20.666936  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.666943  975141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0830 21:32:20.666946  975141 command_runner.go:130] >       ],
	I0830 21:32:20.666953  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.666961  975141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0830 21:32:20.666968  975141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0830 21:32:20.666974  975141 command_runner.go:130] >       ],
	I0830 21:32:20.666978  975141 command_runner.go:130] >       "size": "31470524",
	I0830 21:32:20.666982  975141 command_runner.go:130] >       "uid": null,
	I0830 21:32:20.666994  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.666998  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.667001  975141 command_runner.go:130] >     },
	I0830 21:32:20.667004  975141 command_runner.go:130] >     {
	I0830 21:32:20.667010  975141 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0830 21:32:20.667015  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.667019  975141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0830 21:32:20.667023  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667029  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.667036  975141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0830 21:32:20.667043  975141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0830 21:32:20.667049  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667056  975141 command_runner.go:130] >       "size": "53621675",
	I0830 21:32:20.667060  975141 command_runner.go:130] >       "uid": null,
	I0830 21:32:20.667065  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.667071  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.667075  975141 command_runner.go:130] >     },
	I0830 21:32:20.667081  975141 command_runner.go:130] >     {
	I0830 21:32:20.667086  975141 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0830 21:32:20.667090  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.667095  975141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0830 21:32:20.667098  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667102  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.667109  975141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0830 21:32:20.667118  975141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0830 21:32:20.667122  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667126  975141 command_runner.go:130] >       "size": "295456551",
	I0830 21:32:20.667130  975141 command_runner.go:130] >       "uid": {
	I0830 21:32:20.667134  975141 command_runner.go:130] >         "value": "0"
	I0830 21:32:20.667144  975141 command_runner.go:130] >       },
	I0830 21:32:20.667152  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.667156  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.667160  975141 command_runner.go:130] >     },
	I0830 21:32:20.667163  975141 command_runner.go:130] >     {
	I0830 21:32:20.667170  975141 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0830 21:32:20.667174  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.667178  975141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0830 21:32:20.667182  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667186  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.667193  975141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0830 21:32:20.667202  975141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0830 21:32:20.667205  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667210  975141 command_runner.go:130] >       "size": "126972880",
	I0830 21:32:20.667213  975141 command_runner.go:130] >       "uid": {
	I0830 21:32:20.667217  975141 command_runner.go:130] >         "value": "0"
	I0830 21:32:20.667220  975141 command_runner.go:130] >       },
	I0830 21:32:20.667224  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.667228  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.667235  975141 command_runner.go:130] >     },
	I0830 21:32:20.667241  975141 command_runner.go:130] >     {
	I0830 21:32:20.667247  975141 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0830 21:32:20.667251  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.667256  975141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0830 21:32:20.667263  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667267  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.667276  975141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0830 21:32:20.667283  975141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0830 21:32:20.667293  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667297  975141 command_runner.go:130] >       "size": "123163446",
	I0830 21:32:20.667302  975141 command_runner.go:130] >       "uid": {
	I0830 21:32:20.667306  975141 command_runner.go:130] >         "value": "0"
	I0830 21:32:20.667312  975141 command_runner.go:130] >       },
	I0830 21:32:20.667316  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.667320  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.667323  975141 command_runner.go:130] >     },
	I0830 21:32:20.667326  975141 command_runner.go:130] >     {
	I0830 21:32:20.667334  975141 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0830 21:32:20.667341  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.667345  975141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0830 21:32:20.667350  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667354  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.667363  975141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0830 21:32:20.667370  975141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0830 21:32:20.667375  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667379  975141 command_runner.go:130] >       "size": "74680215",
	I0830 21:32:20.667383  975141 command_runner.go:130] >       "uid": null,
	I0830 21:32:20.667387  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.667392  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.667396  975141 command_runner.go:130] >     },
	I0830 21:32:20.667402  975141 command_runner.go:130] >     {
	I0830 21:32:20.667407  975141 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0830 21:32:20.667414  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.667419  975141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0830 21:32:20.667424  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667431  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.667440  975141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0830 21:32:20.667488  975141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0830 21:32:20.667499  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667503  975141 command_runner.go:130] >       "size": "61477686",
	I0830 21:32:20.667506  975141 command_runner.go:130] >       "uid": {
	I0830 21:32:20.667510  975141 command_runner.go:130] >         "value": "0"
	I0830 21:32:20.667513  975141 command_runner.go:130] >       },
	I0830 21:32:20.667517  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.667521  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.667524  975141 command_runner.go:130] >     },
	I0830 21:32:20.667528  975141 command_runner.go:130] >     {
	I0830 21:32:20.667534  975141 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0830 21:32:20.667540  975141 command_runner.go:130] >       "repoTags": [
	I0830 21:32:20.667544  975141 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0830 21:32:20.667548  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667552  975141 command_runner.go:130] >       "repoDigests": [
	I0830 21:32:20.667558  975141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0830 21:32:20.667570  975141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0830 21:32:20.667575  975141 command_runner.go:130] >       ],
	I0830 21:32:20.667580  975141 command_runner.go:130] >       "size": "750414",
	I0830 21:32:20.667584  975141 command_runner.go:130] >       "uid": {
	I0830 21:32:20.667588  975141 command_runner.go:130] >         "value": "65535"
	I0830 21:32:20.667594  975141 command_runner.go:130] >       },
	I0830 21:32:20.667597  975141 command_runner.go:130] >       "username": "",
	I0830 21:32:20.667601  975141 command_runner.go:130] >       "spec": null
	I0830 21:32:20.667605  975141 command_runner.go:130] >     }
	I0830 21:32:20.667608  975141 command_runner.go:130] >   ]
	I0830 21:32:20.667620  975141 command_runner.go:130] > }
	I0830 21:32:20.668245  975141 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 21:32:20.668265  975141 cache_images.go:84] Images are preloaded, skipping loading
	I0830 21:32:20.668374  975141 ssh_runner.go:195] Run: crio config
	I0830 21:32:20.720323  975141 command_runner.go:130] ! time="2023-08-30 21:32:20.712413463Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0830 21:32:20.720359  975141 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0830 21:32:20.736965  975141 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0830 21:32:20.736997  975141 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0830 21:32:20.737012  975141 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0830 21:32:20.737017  975141 command_runner.go:130] > #
	I0830 21:32:20.737028  975141 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0830 21:32:20.737038  975141 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0830 21:32:20.737048  975141 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0830 21:32:20.737068  975141 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0830 21:32:20.737077  975141 command_runner.go:130] > # reload'.
	I0830 21:32:20.737087  975141 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0830 21:32:20.737095  975141 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0830 21:32:20.737101  975141 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0830 21:32:20.737108  975141 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0830 21:32:20.737111  975141 command_runner.go:130] > [crio]
	I0830 21:32:20.737117  975141 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0830 21:32:20.737128  975141 command_runner.go:130] > # containers images, in this directory.
	I0830 21:32:20.737136  975141 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0830 21:32:20.737154  975141 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0830 21:32:20.737166  975141 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0830 21:32:20.737180  975141 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0830 21:32:20.737193  975141 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0830 21:32:20.737204  975141 command_runner.go:130] > storage_driver = "overlay"
	I0830 21:32:20.737211  975141 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0830 21:32:20.737219  975141 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0830 21:32:20.737224  975141 command_runner.go:130] > storage_option = [
	I0830 21:32:20.737234  975141 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0830 21:32:20.737252  975141 command_runner.go:130] > ]
	I0830 21:32:20.737271  975141 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0830 21:32:20.737285  975141 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0830 21:32:20.737296  975141 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0830 21:32:20.737306  975141 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0830 21:32:20.737316  975141 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0830 21:32:20.737322  975141 command_runner.go:130] > # always happen on a node reboot
	I0830 21:32:20.737330  975141 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0830 21:32:20.737342  975141 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0830 21:32:20.737356  975141 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0830 21:32:20.737375  975141 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0830 21:32:20.737387  975141 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0830 21:32:20.737400  975141 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0830 21:32:20.737412  975141 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0830 21:32:20.737421  975141 command_runner.go:130] > # internal_wipe = true
	I0830 21:32:20.737431  975141 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0830 21:32:20.737444  975141 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0830 21:32:20.737458  975141 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0830 21:32:20.737473  975141 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0830 21:32:20.737486  975141 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0830 21:32:20.737495  975141 command_runner.go:130] > [crio.api]
	I0830 21:32:20.737504  975141 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0830 21:32:20.737512  975141 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0830 21:32:20.737518  975141 command_runner.go:130] > # IP address on which the stream server will listen.
	I0830 21:32:20.737528  975141 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0830 21:32:20.737543  975141 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0830 21:32:20.737559  975141 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0830 21:32:20.737569  975141 command_runner.go:130] > # stream_port = "0"
	I0830 21:32:20.737578  975141 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0830 21:32:20.737588  975141 command_runner.go:130] > # stream_enable_tls = false
	I0830 21:32:20.737598  975141 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0830 21:32:20.737614  975141 command_runner.go:130] > # stream_idle_timeout = ""
	I0830 21:32:20.737623  975141 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0830 21:32:20.737633  975141 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0830 21:32:20.737639  975141 command_runner.go:130] > # minutes.
	I0830 21:32:20.737644  975141 command_runner.go:130] > # stream_tls_cert = ""
	I0830 21:32:20.737664  975141 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0830 21:32:20.737679  975141 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0830 21:32:20.737686  975141 command_runner.go:130] > # stream_tls_key = ""
	I0830 21:32:20.737698  975141 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0830 21:32:20.737711  975141 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0830 21:32:20.737722  975141 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0830 21:32:20.737732  975141 command_runner.go:130] > # stream_tls_ca = ""
	I0830 21:32:20.737747  975141 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:32:20.737758  975141 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0830 21:32:20.737773  975141 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:32:20.737782  975141 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0830 21:32:20.737806  975141 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0830 21:32:20.737815  975141 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0830 21:32:20.737819  975141 command_runner.go:130] > [crio.runtime]
	I0830 21:32:20.737824  975141 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0830 21:32:20.737831  975141 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0830 21:32:20.737835  975141 command_runner.go:130] > # "nofile=1024:2048"
	I0830 21:32:20.737841  975141 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0830 21:32:20.737849  975141 command_runner.go:130] > # default_ulimits = [
	I0830 21:32:20.737853  975141 command_runner.go:130] > # ]
	I0830 21:32:20.737859  975141 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0830 21:32:20.737865  975141 command_runner.go:130] > # no_pivot = false
	I0830 21:32:20.737871  975141 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0830 21:32:20.737879  975141 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0830 21:32:20.737884  975141 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0830 21:32:20.737892  975141 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0830 21:32:20.737897  975141 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0830 21:32:20.737906  975141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:32:20.737912  975141 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0830 21:32:20.737917  975141 command_runner.go:130] > # Cgroup setting for conmon
	I0830 21:32:20.737924  975141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0830 21:32:20.737930  975141 command_runner.go:130] > conmon_cgroup = "pod"
	I0830 21:32:20.737936  975141 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0830 21:32:20.737943  975141 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0830 21:32:20.737950  975141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:32:20.737956  975141 command_runner.go:130] > conmon_env = [
	I0830 21:32:20.737964  975141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0830 21:32:20.737970  975141 command_runner.go:130] > ]
	I0830 21:32:20.737975  975141 command_runner.go:130] > # Additional environment variables to set for all the
	I0830 21:32:20.737984  975141 command_runner.go:130] > # containers. These are overridden if set in the
	I0830 21:32:20.737992  975141 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0830 21:32:20.737999  975141 command_runner.go:130] > # default_env = [
	I0830 21:32:20.738003  975141 command_runner.go:130] > # ]
	I0830 21:32:20.738008  975141 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0830 21:32:20.738014  975141 command_runner.go:130] > # selinux = false
	I0830 21:32:20.738021  975141 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0830 21:32:20.738029  975141 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0830 21:32:20.738036  975141 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0830 21:32:20.738040  975141 command_runner.go:130] > # seccomp_profile = ""
	I0830 21:32:20.738048  975141 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0830 21:32:20.738056  975141 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0830 21:32:20.738062  975141 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0830 21:32:20.738068  975141 command_runner.go:130] > # which might increase security.
	I0830 21:32:20.738073  975141 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0830 21:32:20.738083  975141 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0830 21:32:20.738091  975141 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0830 21:32:20.738096  975141 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0830 21:32:20.738105  975141 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0830 21:32:20.738112  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:32:20.738116  975141 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0830 21:32:20.738124  975141 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0830 21:32:20.738128  975141 command_runner.go:130] > # the cgroup blockio controller.
	I0830 21:32:20.738155  975141 command_runner.go:130] > # blockio_config_file = ""
	I0830 21:32:20.738164  975141 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0830 21:32:20.738170  975141 command_runner.go:130] > # irqbalance daemon.
	I0830 21:32:20.738175  975141 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0830 21:32:20.738183  975141 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0830 21:32:20.738191  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:32:20.738195  975141 command_runner.go:130] > # rdt_config_file = ""
	I0830 21:32:20.738203  975141 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0830 21:32:20.738207  975141 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0830 21:32:20.738213  975141 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0830 21:32:20.738223  975141 command_runner.go:130] > # separate_pull_cgroup = ""
	I0830 21:32:20.738231  975141 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0830 21:32:20.738239  975141 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0830 21:32:20.738245  975141 command_runner.go:130] > # will be added.
	I0830 21:32:20.738249  975141 command_runner.go:130] > # default_capabilities = [
	I0830 21:32:20.738255  975141 command_runner.go:130] > # 	"CHOWN",
	I0830 21:32:20.738259  975141 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0830 21:32:20.738265  975141 command_runner.go:130] > # 	"FSETID",
	I0830 21:32:20.738269  975141 command_runner.go:130] > # 	"FOWNER",
	I0830 21:32:20.738275  975141 command_runner.go:130] > # 	"SETGID",
	I0830 21:32:20.738279  975141 command_runner.go:130] > # 	"SETUID",
	I0830 21:32:20.738284  975141 command_runner.go:130] > # 	"SETPCAP",
	I0830 21:32:20.738289  975141 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0830 21:32:20.738294  975141 command_runner.go:130] > # 	"KILL",
	I0830 21:32:20.738298  975141 command_runner.go:130] > # ]
	I0830 21:32:20.738306  975141 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0830 21:32:20.738314  975141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:32:20.738320  975141 command_runner.go:130] > # default_sysctls = [
	I0830 21:32:20.738326  975141 command_runner.go:130] > # ]
	I0830 21:32:20.738332  975141 command_runner.go:130] > # List of devices on the host that a
	I0830 21:32:20.738338  975141 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0830 21:32:20.738347  975141 command_runner.go:130] > # allowed_devices = [
	I0830 21:32:20.738353  975141 command_runner.go:130] > # 	"/dev/fuse",
	I0830 21:32:20.738357  975141 command_runner.go:130] > # ]
	I0830 21:32:20.738364  975141 command_runner.go:130] > # List of additional devices, specified as
	I0830 21:32:20.738371  975141 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0830 21:32:20.738378  975141 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0830 21:32:20.738458  975141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:32:20.738471  975141 command_runner.go:130] > # additional_devices = [
	I0830 21:32:20.738474  975141 command_runner.go:130] > # ]
	I0830 21:32:20.738479  975141 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0830 21:32:20.738483  975141 command_runner.go:130] > # cdi_spec_dirs = [
	I0830 21:32:20.738487  975141 command_runner.go:130] > # 	"/etc/cdi",
	I0830 21:32:20.738492  975141 command_runner.go:130] > # 	"/var/run/cdi",
	I0830 21:32:20.738495  975141 command_runner.go:130] > # ]
	I0830 21:32:20.738505  975141 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0830 21:32:20.738517  975141 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0830 21:32:20.738523  975141 command_runner.go:130] > # Defaults to false.
	I0830 21:32:20.738529  975141 command_runner.go:130] > # device_ownership_from_security_context = false
	I0830 21:32:20.738537  975141 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0830 21:32:20.738543  975141 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0830 21:32:20.738548  975141 command_runner.go:130] > # hooks_dir = [
	I0830 21:32:20.738553  975141 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0830 21:32:20.738560  975141 command_runner.go:130] > # ]
	I0830 21:32:20.738569  975141 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0830 21:32:20.738578  975141 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0830 21:32:20.738586  975141 command_runner.go:130] > # its default mounts from the following two files:
	I0830 21:32:20.738592  975141 command_runner.go:130] > #
	I0830 21:32:20.738597  975141 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0830 21:32:20.738606  975141 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0830 21:32:20.738613  975141 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0830 21:32:20.738617  975141 command_runner.go:130] > #
	I0830 21:32:20.738623  975141 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0830 21:32:20.738632  975141 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0830 21:32:20.738643  975141 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0830 21:32:20.738650  975141 command_runner.go:130] > #      only add mounts it finds in this file.
	I0830 21:32:20.738653  975141 command_runner.go:130] > #
	I0830 21:32:20.738668  975141 command_runner.go:130] > # default_mounts_file = ""
	I0830 21:32:20.738679  975141 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0830 21:32:20.738693  975141 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0830 21:32:20.738702  975141 command_runner.go:130] > pids_limit = 1024
	I0830 21:32:20.738714  975141 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0830 21:32:20.738727  975141 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0830 21:32:20.738740  975141 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0830 21:32:20.738756  975141 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0830 21:32:20.738765  975141 command_runner.go:130] > # log_size_max = -1
	I0830 21:32:20.738779  975141 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0830 21:32:20.738789  975141 command_runner.go:130] > # log_to_journald = false
	I0830 21:32:20.738802  975141 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0830 21:32:20.738813  975141 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0830 21:32:20.738821  975141 command_runner.go:130] > # Path to directory for container attach sockets.
	I0830 21:32:20.738832  975141 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0830 21:32:20.738845  975141 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0830 21:32:20.738851  975141 command_runner.go:130] > # bind_mount_prefix = ""
	I0830 21:32:20.738857  975141 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0830 21:32:20.738863  975141 command_runner.go:130] > # read_only = false
	I0830 21:32:20.738870  975141 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0830 21:32:20.738878  975141 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0830 21:32:20.738883  975141 command_runner.go:130] > # live configuration reload.
	I0830 21:32:20.738889  975141 command_runner.go:130] > # log_level = "info"
	I0830 21:32:20.738894  975141 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0830 21:32:20.738902  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:32:20.738905  975141 command_runner.go:130] > # log_filter = ""
	I0830 21:32:20.738912  975141 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0830 21:32:20.738920  975141 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0830 21:32:20.738926  975141 command_runner.go:130] > # separated by comma.
	I0830 21:32:20.738930  975141 command_runner.go:130] > # uid_mappings = ""
	I0830 21:32:20.738938  975141 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0830 21:32:20.738947  975141 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0830 21:32:20.738954  975141 command_runner.go:130] > # separated by comma.
	I0830 21:32:20.738960  975141 command_runner.go:130] > # gid_mappings = ""
	I0830 21:32:20.738969  975141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0830 21:32:20.738975  975141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:32:20.738983  975141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:32:20.738988  975141 command_runner.go:130] > # minimum_mappable_uid = -1
	I0830 21:32:20.738997  975141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0830 21:32:20.739006  975141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:32:20.739011  975141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:32:20.739018  975141 command_runner.go:130] > # minimum_mappable_gid = -1
	I0830 21:32:20.739024  975141 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0830 21:32:20.739032  975141 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0830 21:32:20.739040  975141 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0830 21:32:20.739044  975141 command_runner.go:130] > # ctr_stop_timeout = 30
	I0830 21:32:20.739051  975141 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0830 21:32:20.739057  975141 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0830 21:32:20.739064  975141 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0830 21:32:20.739069  975141 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0830 21:32:20.739079  975141 command_runner.go:130] > drop_infra_ctr = false
	I0830 21:32:20.739089  975141 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0830 21:32:20.739097  975141 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0830 21:32:20.739106  975141 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0830 21:32:20.739113  975141 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0830 21:32:20.739119  975141 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0830 21:32:20.739126  975141 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0830 21:32:20.739130  975141 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0830 21:32:20.739139  975141 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0830 21:32:20.739143  975141 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0830 21:32:20.739151  975141 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0830 21:32:20.739159  975141 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0830 21:32:20.739165  975141 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0830 21:32:20.739171  975141 command_runner.go:130] > # default_runtime = "runc"
	I0830 21:32:20.739176  975141 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0830 21:32:20.739186  975141 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0830 21:32:20.739197  975141 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0830 21:32:20.739204  975141 command_runner.go:130] > # creation as a file is not desired either.
	I0830 21:32:20.739212  975141 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0830 21:32:20.739221  975141 command_runner.go:130] > # the hostname is being managed dynamically.
	I0830 21:32:20.739226  975141 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0830 21:32:20.739231  975141 command_runner.go:130] > # ]
	I0830 21:32:20.739237  975141 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0830 21:32:20.739246  975141 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0830 21:32:20.739252  975141 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0830 21:32:20.739260  975141 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0830 21:32:20.739264  975141 command_runner.go:130] > #
	I0830 21:32:20.739269  975141 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0830 21:32:20.739276  975141 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0830 21:32:20.739280  975141 command_runner.go:130] > #  runtime_type = "oci"
	I0830 21:32:20.739287  975141 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0830 21:32:20.739292  975141 command_runner.go:130] > #  privileged_without_host_devices = false
	I0830 21:32:20.739298  975141 command_runner.go:130] > #  allowed_annotations = []
	I0830 21:32:20.739302  975141 command_runner.go:130] > # Where:
	I0830 21:32:20.739309  975141 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0830 21:32:20.739315  975141 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0830 21:32:20.739323  975141 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0830 21:32:20.739335  975141 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0830 21:32:20.739341  975141 command_runner.go:130] > #   in $PATH.
	I0830 21:32:20.739347  975141 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0830 21:32:20.739354  975141 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0830 21:32:20.739365  975141 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0830 21:32:20.739371  975141 command_runner.go:130] > #   state.
	I0830 21:32:20.739377  975141 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0830 21:32:20.739385  975141 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0830 21:32:20.739391  975141 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0830 21:32:20.739399  975141 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0830 21:32:20.739405  975141 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0830 21:32:20.739414  975141 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0830 21:32:20.739421  975141 command_runner.go:130] > #   The currently recognized values are:
	I0830 21:32:20.739427  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0830 21:32:20.739436  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0830 21:32:20.739446  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0830 21:32:20.739454  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0830 21:32:20.739462  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0830 21:32:20.739472  975141 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0830 21:32:20.739478  975141 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0830 21:32:20.739487  975141 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0830 21:32:20.739494  975141 command_runner.go:130] > #   should be moved to the container's cgroup
	I0830 21:32:20.739498  975141 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0830 21:32:20.739504  975141 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0830 21:32:20.739508  975141 command_runner.go:130] > runtime_type = "oci"
	I0830 21:32:20.739515  975141 command_runner.go:130] > runtime_root = "/run/runc"
	I0830 21:32:20.739519  975141 command_runner.go:130] > runtime_config_path = ""
	I0830 21:32:20.739525  975141 command_runner.go:130] > monitor_path = ""
	I0830 21:32:20.739529  975141 command_runner.go:130] > monitor_cgroup = ""
	I0830 21:32:20.739535  975141 command_runner.go:130] > monitor_exec_cgroup = ""
	I0830 21:32:20.739543  975141 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0830 21:32:20.739550  975141 command_runner.go:130] > # running containers
	I0830 21:32:20.739554  975141 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0830 21:32:20.739562  975141 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0830 21:32:20.739612  975141 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0830 21:32:20.739624  975141 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0830 21:32:20.739632  975141 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0830 21:32:20.739636  975141 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0830 21:32:20.739641  975141 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0830 21:32:20.739645  975141 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0830 21:32:20.739653  975141 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0830 21:32:20.739657  975141 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0830 21:32:20.739670  975141 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0830 21:32:20.739677  975141 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0830 21:32:20.739684  975141 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0830 21:32:20.739693  975141 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0830 21:32:20.739703  975141 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0830 21:32:20.739711  975141 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0830 21:32:20.739722  975141 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0830 21:32:20.739729  975141 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0830 21:32:20.739739  975141 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0830 21:32:20.739748  975141 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0830 21:32:20.739753  975141 command_runner.go:130] > # Example:
	I0830 21:32:20.739758  975141 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0830 21:32:20.739767  975141 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0830 21:32:20.739792  975141 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0830 21:32:20.739801  975141 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0830 21:32:20.739805  975141 command_runner.go:130] > # cpuset = 0
	I0830 21:32:20.739812  975141 command_runner.go:130] > # cpushares = "0-1"
	I0830 21:32:20.739815  975141 command_runner.go:130] > # Where:
	I0830 21:32:20.739823  975141 command_runner.go:130] > # The workload name is workload-type.
	I0830 21:32:20.739829  975141 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0830 21:32:20.739837  975141 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0830 21:32:20.739844  975141 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0830 21:32:20.739853  975141 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0830 21:32:20.739861  975141 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0830 21:32:20.739866  975141 command_runner.go:130] > # 
	I0830 21:32:20.739872  975141 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0830 21:32:20.739878  975141 command_runner.go:130] > #
	I0830 21:32:20.739883  975141 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0830 21:32:20.739891  975141 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0830 21:32:20.739898  975141 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0830 21:32:20.739909  975141 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0830 21:32:20.739917  975141 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0830 21:32:20.739921  975141 command_runner.go:130] > [crio.image]
	I0830 21:32:20.739927  975141 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0830 21:32:20.739933  975141 command_runner.go:130] > # default_transport = "docker://"
	I0830 21:32:20.739939  975141 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0830 21:32:20.739948  975141 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:32:20.739954  975141 command_runner.go:130] > # global_auth_file = ""
	I0830 21:32:20.739961  975141 command_runner.go:130] > # The image used to instantiate infra containers.
	I0830 21:32:20.739969  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:32:20.739976  975141 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0830 21:32:20.739983  975141 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0830 21:32:20.739990  975141 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:32:20.739995  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:32:20.740001  975141 command_runner.go:130] > # pause_image_auth_file = ""
	I0830 21:32:20.740007  975141 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0830 21:32:20.740017  975141 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0830 21:32:20.740023  975141 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0830 21:32:20.740031  975141 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0830 21:32:20.740035  975141 command_runner.go:130] > # pause_command = "/pause"
	I0830 21:32:20.740040  975141 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0830 21:32:20.740046  975141 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0830 21:32:20.740051  975141 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0830 21:32:20.740057  975141 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0830 21:32:20.740062  975141 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0830 21:32:20.740065  975141 command_runner.go:130] > # signature_policy = ""
	I0830 21:32:20.740070  975141 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0830 21:32:20.740076  975141 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0830 21:32:20.740080  975141 command_runner.go:130] > # changing them here.
	I0830 21:32:20.740083  975141 command_runner.go:130] > # insecure_registries = [
	I0830 21:32:20.740086  975141 command_runner.go:130] > # ]
	I0830 21:32:20.740095  975141 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0830 21:32:20.740100  975141 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0830 21:32:20.740107  975141 command_runner.go:130] > # image_volumes = "mkdir"
	I0830 21:32:20.740112  975141 command_runner.go:130] > # Temporary directory to use for storing big files
	I0830 21:32:20.740119  975141 command_runner.go:130] > # big_files_temporary_dir = ""
	I0830 21:32:20.740128  975141 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0830 21:32:20.740134  975141 command_runner.go:130] > # CNI plugins.
	I0830 21:32:20.740138  975141 command_runner.go:130] > [crio.network]
	I0830 21:32:20.740145  975141 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0830 21:32:20.740151  975141 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0830 21:32:20.740156  975141 command_runner.go:130] > # cni_default_network = ""
	I0830 21:32:20.740161  975141 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0830 21:32:20.740168  975141 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0830 21:32:20.740173  975141 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0830 21:32:20.740179  975141 command_runner.go:130] > # plugin_dirs = [
	I0830 21:32:20.740183  975141 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0830 21:32:20.740188  975141 command_runner.go:130] > # ]
	I0830 21:32:20.740194  975141 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0830 21:32:20.740200  975141 command_runner.go:130] > [crio.metrics]
	I0830 21:32:20.740205  975141 command_runner.go:130] > # Globally enable or disable metrics support.
	I0830 21:32:20.740211  975141 command_runner.go:130] > enable_metrics = true
	I0830 21:32:20.740215  975141 command_runner.go:130] > # Specify enabled metrics collectors.
	I0830 21:32:20.740222  975141 command_runner.go:130] > # Per default all metrics are enabled.
	I0830 21:32:20.740232  975141 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0830 21:32:20.740242  975141 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0830 21:32:20.740250  975141 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0830 21:32:20.740255  975141 command_runner.go:130] > # metrics_collectors = [
	I0830 21:32:20.740259  975141 command_runner.go:130] > # 	"operations",
	I0830 21:32:20.740269  975141 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0830 21:32:20.740275  975141 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0830 21:32:20.740279  975141 command_runner.go:130] > # 	"operations_errors",
	I0830 21:32:20.740285  975141 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0830 21:32:20.740290  975141 command_runner.go:130] > # 	"image_pulls_by_name",
	I0830 21:32:20.740296  975141 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0830 21:32:20.740300  975141 command_runner.go:130] > # 	"image_pulls_failures",
	I0830 21:32:20.740307  975141 command_runner.go:130] > # 	"image_pulls_successes",
	I0830 21:32:20.740311  975141 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0830 21:32:20.740318  975141 command_runner.go:130] > # 	"image_layer_reuse",
	I0830 21:32:20.740322  975141 command_runner.go:130] > # 	"containers_oom_total",
	I0830 21:32:20.740328  975141 command_runner.go:130] > # 	"containers_oom",
	I0830 21:32:20.740332  975141 command_runner.go:130] > # 	"processes_defunct",
	I0830 21:32:20.740341  975141 command_runner.go:130] > # 	"operations_total",
	I0830 21:32:20.740345  975141 command_runner.go:130] > # 	"operations_latency_seconds",
	I0830 21:32:20.740352  975141 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0830 21:32:20.740356  975141 command_runner.go:130] > # 	"operations_errors_total",
	I0830 21:32:20.740361  975141 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0830 21:32:20.740366  975141 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0830 21:32:20.740372  975141 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0830 21:32:20.740376  975141 command_runner.go:130] > # 	"image_pulls_success_total",
	I0830 21:32:20.740383  975141 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0830 21:32:20.740387  975141 command_runner.go:130] > # 	"containers_oom_count_total",
	I0830 21:32:20.740392  975141 command_runner.go:130] > # ]
	I0830 21:32:20.740397  975141 command_runner.go:130] > # The port on which the metrics server will listen.
	I0830 21:32:20.740403  975141 command_runner.go:130] > # metrics_port = 9090
	I0830 21:32:20.740408  975141 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0830 21:32:20.740414  975141 command_runner.go:130] > # metrics_socket = ""
	I0830 21:32:20.740419  975141 command_runner.go:130] > # The certificate for the secure metrics server.
	I0830 21:32:20.740427  975141 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0830 21:32:20.740433  975141 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0830 21:32:20.740442  975141 command_runner.go:130] > # certificate on any modification event.
	I0830 21:32:20.740448  975141 command_runner.go:130] > # metrics_cert = ""
	I0830 21:32:20.740454  975141 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0830 21:32:20.740461  975141 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0830 21:32:20.740464  975141 command_runner.go:130] > # metrics_key = ""
	I0830 21:32:20.740470  975141 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0830 21:32:20.740477  975141 command_runner.go:130] > [crio.tracing]
	I0830 21:32:20.740482  975141 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0830 21:32:20.740488  975141 command_runner.go:130] > # enable_tracing = false
	I0830 21:32:20.740493  975141 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0830 21:32:20.740502  975141 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0830 21:32:20.740510  975141 command_runner.go:130] > # Number of samples to collect per million spans.
	I0830 21:32:20.740514  975141 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0830 21:32:20.740520  975141 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0830 21:32:20.740526  975141 command_runner.go:130] > [crio.stats]
	I0830 21:32:20.740532  975141 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0830 21:32:20.740539  975141 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0830 21:32:20.740545  975141 command_runner.go:130] > # stats_collection_period = 0
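	The dump above is the effective CRI-O configuration reported for this node. For anyone wanting to reproduce the non-default values it shows (conmon, conmon_cgroup, cgroup_manager, pids_limit, pause_image), a minimal sketch of a drop-in applying the same overrides follows; the /etc/crio/crio.conf.d path is CRI-O's standard drop-in directory and is an assumption, not something taken from this run.

	    sudo tee /etc/crio/crio.conf.d/99-minikube-style-overrides.conf >/dev/null <<'EOF'
	    # values mirroring the effective config logged above
	    [crio.runtime]
	    conmon = "/usr/libexec/crio/conmon"
	    conmon_cgroup = "pod"
	    cgroup_manager = "cgroupfs"
	    pids_limit = 1024
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	    EOF
	    sudo systemctl restart crio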
	I0830 21:32:20.740635  975141 cni.go:84] Creating CNI manager for ""
	I0830 21:32:20.740649  975141 cni.go:136] 1 nodes found, recommending kindnet
	I0830 21:32:20.740676  975141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:32:20.740697  975141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.20 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-752665 NodeName:multinode-752665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:32:20.740830  975141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-752665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:32:20.740909  975141 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-752665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
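	The [Service] override above is what ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 375-byte scp a few lines below writes exactly that file). A short sketch of how such a drop-in is normally activated, using standard systemd commands rather than anything recorded in this log:

	    sudo systemctl daemon-reload
	    sudo systemctl restart kubelet
	    systemctl cat kubelet   # shows the base unit plus the 10-kubeadm.conf override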
	I0830 21:32:20.740969  975141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 21:32:20.750223  975141 command_runner.go:130] > kubeadm
	I0830 21:32:20.750241  975141 command_runner.go:130] > kubectl
	I0830 21:32:20.750245  975141 command_runner.go:130] > kubelet
	I0830 21:32:20.750268  975141 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:32:20.750332  975141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 21:32:20.758550  975141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0830 21:32:20.774923  975141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 21:32:20.791427  975141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
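	The 2100-byte kubeadm.yaml.new staged above is the generated configuration printed in full earlier (InitConfiguration through KubeProxyConfiguration). As a rough sketch of how such a file is normally consumed, assuming standard kubeadm flags rather than the exact command this run goes on to use:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new \
	      --ignore-preflight-errors=all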
	I0830 21:32:20.807676  975141 ssh_runner.go:195] Run: grep 192.168.39.20	control-plane.minikube.internal$ /etc/hosts
	I0830 21:32:20.811329  975141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:32:20.824216  975141 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665 for IP: 192.168.39.20
	I0830 21:32:20.824251  975141 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:20.824457  975141 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 21:32:20.824502  975141 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 21:32:20.824564  975141 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key
	I0830 21:32:20.824584  975141 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt with IP's: []
	I0830 21:32:21.179021  975141 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt ...
	I0830 21:32:21.179056  975141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt: {Name:mkb07f461405d6fd7371792ea27eb47ffee825b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:21.179236  975141 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key ...
	I0830 21:32:21.179247  975141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key: {Name:mk2fa40a83000220b1e495a3737cc72679a3f094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:21.179329  975141 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key.2e41fa34
	I0830 21:32:21.179347  975141 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt.2e41fa34 with IP's: [192.168.39.20 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 21:32:21.302940  975141 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt.2e41fa34 ...
	I0830 21:32:21.302977  975141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt.2e41fa34: {Name:mk2acb2609c2e840ae59beb919b2ef709975be7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:21.303143  975141 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key.2e41fa34 ...
	I0830 21:32:21.303153  975141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key.2e41fa34: {Name:mkda365963b0b4879f71c2e68672e95bc055c87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:21.303220  975141 certs.go:337] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt.2e41fa34 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt
	I0830 21:32:21.303305  975141 certs.go:341] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key.2e41fa34 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key
	I0830 21:32:21.303365  975141 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.key
	I0830 21:32:21.303380  975141 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.crt with IP's: []
	I0830 21:32:21.687882  975141 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.crt ...
	I0830 21:32:21.687925  975141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.crt: {Name:mk7d82d87f22afc7aee96b344f5c0882c9d5033f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:21.688123  975141 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.key ...
	I0830 21:32:21.688135  975141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.key: {Name:mk02a6d1c23b1234881452cef62efa26efd22c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:21.688244  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0830 21:32:21.688265  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0830 21:32:21.688277  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0830 21:32:21.688298  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0830 21:32:21.688313  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 21:32:21.688332  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 21:32:21.688347  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 21:32:21.688361  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 21:32:21.688426  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 21:32:21.688474  975141 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 21:32:21.688488  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 21:32:21.688519  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 21:32:21.688546  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:32:21.688572  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 21:32:21.688617  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:32:21.688673  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem -> /usr/share/ca-certificates/962621.pem
	I0830 21:32:21.688706  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /usr/share/ca-certificates/9626212.pem
	I0830 21:32:21.688721  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:32:21.689295  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 21:32:21.720076  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 21:32:21.744940  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 21:32:21.768914  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 21:32:21.793223  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:32:21.816311  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:32:21.840253  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:32:21.863953  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 21:32:21.888614  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 21:32:21.911867  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 21:32:21.934677  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:32:21.960937  975141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 21:32:21.977197  975141 ssh_runner.go:195] Run: openssl version
	I0830 21:32:21.982752  975141 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0830 21:32:21.982855  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:32:21.992502  975141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:32:21.997096  975141 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:32:21.997355  975141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:32:21.997414  975141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:32:22.003215  975141 command_runner.go:130] > b5213941
	I0830 21:32:22.003305  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:32:22.013081  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 21:32:22.022663  975141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 21:32:22.027082  975141 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:32:22.027179  975141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:32:22.027242  975141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 21:32:22.032520  975141 command_runner.go:130] > 51391683
	I0830 21:32:22.032774  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 21:32:22.041979  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 21:32:22.051566  975141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 21:32:22.056128  975141 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:32:22.056158  975141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:32:22.056203  975141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 21:32:22.061396  975141 command_runner.go:130] > 3ec20f2e
	I0830 21:32:22.061660  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 21:32:22.071031  975141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:32:22.075467  975141 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:32:22.075510  975141 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:32:22.075562  975141 kubeadm.go:404] StartCluster: {Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:32:22.075639  975141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 21:32:22.075684  975141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:32:22.106290  975141 cri.go:89] found id: ""
	I0830 21:32:22.106376  975141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 21:32:22.115406  975141 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0830 21:32:22.115431  975141 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0830 21:32:22.115443  975141 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0830 21:32:22.115559  975141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 21:32:22.124464  975141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 21:32:22.132713  975141 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0830 21:32:22.132737  975141 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0830 21:32:22.132749  975141 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0830 21:32:22.132763  975141 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:32:22.132844  975141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:32:22.132901  975141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 21:32:22.243715  975141 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 21:32:22.243759  975141 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0830 21:32:22.243904  975141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 21:32:22.243921  975141 command_runner.go:130] > [preflight] Running pre-flight checks
	I0830 21:32:22.482171  975141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 21:32:22.482204  975141 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 21:32:22.482336  975141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 21:32:22.482348  975141 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 21:32:22.482505  975141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 21:32:22.482529  975141 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 21:32:22.657561  975141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 21:32:22.659780  975141 out.go:204]   - Generating certificates and keys ...
	I0830 21:32:22.657636  975141 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 21:32:22.660899  975141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 21:32:22.660918  975141 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0830 21:32:22.660989  975141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 21:32:22.661005  975141 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0830 21:32:22.769644  975141 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 21:32:22.769687  975141 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 21:32:22.880454  975141 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 21:32:22.880490  975141 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0830 21:32:23.173346  975141 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 21:32:23.173386  975141 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0830 21:32:23.710174  975141 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 21:32:23.710209  975141 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0830 21:32:23.938043  975141 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 21:32:23.938080  975141 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0830 21:32:23.938578  975141 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-752665] and IPs [192.168.39.20 127.0.0.1 ::1]
	I0830 21:32:23.938601  975141 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-752665] and IPs [192.168.39.20 127.0.0.1 ::1]
	I0830 21:32:24.127109  975141 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 21:32:24.127154  975141 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0830 21:32:24.127584  975141 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-752665] and IPs [192.168.39.20 127.0.0.1 ::1]
	I0830 21:32:24.127606  975141 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-752665] and IPs [192.168.39.20 127.0.0.1 ::1]
	I0830 21:32:24.311100  975141 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 21:32:24.311138  975141 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 21:32:24.532838  975141 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 21:32:24.532874  975141 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 21:32:24.846293  975141 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 21:32:24.846329  975141 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0830 21:32:24.846530  975141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 21:32:24.846553  975141 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 21:32:24.949876  975141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 21:32:24.949914  975141 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 21:32:25.086538  975141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 21:32:25.086583  975141 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 21:32:25.313736  975141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 21:32:25.313764  975141 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 21:32:25.543406  975141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 21:32:25.543437  975141 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 21:32:25.544057  975141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 21:32:25.544072  975141 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 21:32:25.548166  975141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 21:32:25.550523  975141 out.go:204]   - Booting up control plane ...
	I0830 21:32:25.548273  975141 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 21:32:25.550696  975141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 21:32:25.550716  975141 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 21:32:25.550820  975141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 21:32:25.550831  975141 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 21:32:25.551720  975141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 21:32:25.551736  975141 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 21:32:25.567703  975141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:32:25.567738  975141 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:32:25.568622  975141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:32:25.568653  975141 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:32:25.568752  975141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 21:32:25.568764  975141 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 21:32:25.689465  975141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 21:32:25.689494  975141 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 21:32:33.692043  975141 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003808 seconds
	I0830 21:32:33.692073  975141 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003808 seconds
	I0830 21:32:33.692214  975141 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 21:32:33.692228  975141 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 21:32:33.708640  975141 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 21:32:33.708672  975141 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 21:32:34.254525  975141 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 21:32:34.254566  975141 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0830 21:32:34.254789  975141 kubeadm.go:322] [mark-control-plane] Marking the node multinode-752665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 21:32:34.254806  975141 command_runner.go:130] > [mark-control-plane] Marking the node multinode-752665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 21:32:34.774130  975141 kubeadm.go:322] [bootstrap-token] Using token: mxp5vx.4v6ejmxk14f99c3u
	I0830 21:32:34.775735  975141 out.go:204]   - Configuring RBAC rules ...
	I0830 21:32:34.774204  975141 command_runner.go:130] > [bootstrap-token] Using token: mxp5vx.4v6ejmxk14f99c3u
	I0830 21:32:34.775895  975141 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 21:32:34.775911  975141 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 21:32:34.782079  975141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 21:32:34.782097  975141 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 21:32:34.795965  975141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 21:32:34.795988  975141 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 21:32:34.799939  975141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 21:32:34.799956  975141 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 21:32:34.804018  975141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 21:32:34.804041  975141 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 21:32:34.808422  975141 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 21:32:34.808451  975141 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 21:32:34.834517  975141 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 21:32:34.834543  975141 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 21:32:35.094048  975141 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 21:32:35.094082  975141 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0830 21:32:35.189710  975141 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 21:32:35.189736  975141 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0830 21:32:35.190025  975141 kubeadm.go:322] 
	I0830 21:32:35.190122  975141 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 21:32:35.190156  975141 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0830 21:32:35.190183  975141 kubeadm.go:322] 
	I0830 21:32:35.190282  975141 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 21:32:35.190294  975141 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0830 21:32:35.190300  975141 kubeadm.go:322] 
	I0830 21:32:35.190333  975141 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 21:32:35.190349  975141 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0830 21:32:35.190428  975141 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 21:32:35.190437  975141 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 21:32:35.190496  975141 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 21:32:35.190504  975141 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 21:32:35.190507  975141 kubeadm.go:322] 
	I0830 21:32:35.190572  975141 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 21:32:35.190583  975141 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0830 21:32:35.190587  975141 kubeadm.go:322] 
	I0830 21:32:35.190665  975141 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 21:32:35.190679  975141 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 21:32:35.190689  975141 kubeadm.go:322] 
	I0830 21:32:35.190765  975141 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 21:32:35.190778  975141 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0830 21:32:35.190856  975141 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 21:32:35.190863  975141 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 21:32:35.190950  975141 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 21:32:35.190962  975141 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 21:32:35.190968  975141 kubeadm.go:322] 
	I0830 21:32:35.191091  975141 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 21:32:35.191105  975141 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0830 21:32:35.191219  975141 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 21:32:35.191234  975141 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0830 21:32:35.191241  975141 kubeadm.go:322] 
	I0830 21:32:35.191360  975141 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token mxp5vx.4v6ejmxk14f99c3u \
	I0830 21:32:35.191372  975141 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token mxp5vx.4v6ejmxk14f99c3u \
	I0830 21:32:35.191464  975141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 21:32:35.191471  975141 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 21:32:35.191501  975141 kubeadm.go:322] 	--control-plane 
	I0830 21:32:35.191511  975141 command_runner.go:130] > 	--control-plane 
	I0830 21:32:35.191520  975141 kubeadm.go:322] 
	I0830 21:32:35.191652  975141 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 21:32:35.191663  975141 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0830 21:32:35.191667  975141 kubeadm.go:322] 
	I0830 21:32:35.191792  975141 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token mxp5vx.4v6ejmxk14f99c3u \
	I0830 21:32:35.191804  975141 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mxp5vx.4v6ejmxk14f99c3u \
	I0830 21:32:35.191948  975141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 21:32:35.191963  975141 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 21:32:35.192646  975141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 21:32:35.192671  975141 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 21:32:35.192703  975141 cni.go:84] Creating CNI manager for ""
	I0830 21:32:35.192723  975141 cni.go:136] 1 nodes found, recommending kindnet
	I0830 21:32:35.194934  975141 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0830 21:32:35.196469  975141 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 21:32:35.202917  975141 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 21:32:35.202942  975141 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0830 21:32:35.202951  975141 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0830 21:32:35.202961  975141 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:32:35.202970  975141 command_runner.go:130] > Access: 2023-08-30 21:32:00.901149145 +0000
	I0830 21:32:35.202979  975141 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0830 21:32:35.202986  975141 command_runner.go:130] > Change: 2023-08-30 21:31:59.074149145 +0000
	I0830 21:32:35.202998  975141 command_runner.go:130] >  Birth: -
	I0830 21:32:35.203910  975141 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 21:32:35.203932  975141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 21:32:35.234962  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 21:32:36.362336  975141 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0830 21:32:36.369073  975141 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0830 21:32:36.393317  975141 command_runner.go:130] > serviceaccount/kindnet created
	I0830 21:32:36.410564  975141 command_runner.go:130] > daemonset.apps/kindnet created
	I0830 21:32:36.413601  975141 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178607112s)
	I0830 21:32:36.413652  975141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 21:32:36.413757  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:36.413759  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=multinode-752665 minikube.k8s.io/updated_at=2023_08_30T21_32_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:36.462909  975141 command_runner.go:130] > -16
	I0830 21:32:36.462980  975141 ops.go:34] apiserver oom_adj: -16
	I0830 21:32:36.594825  975141 command_runner.go:130] > node/multinode-752665 labeled
	I0830 21:32:36.614794  975141 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0830 21:32:36.616994  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:36.715035  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:36.715229  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:36.808098  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:37.309044  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:37.409426  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:37.808819  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:37.893519  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:38.309188  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:38.402773  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:38.808378  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:38.893319  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:39.308396  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:39.406745  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:39.808936  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:39.894683  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:40.308446  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:40.395066  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:40.808818  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:40.893441  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:41.309077  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:41.423521  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:41.809124  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:41.905131  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:42.308744  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:42.419069  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:42.808491  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:42.897554  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:43.309255  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:43.397775  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:43.808466  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:43.898719  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:44.309026  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:44.393035  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:44.808518  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:44.888906  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:45.309187  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:45.402817  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:45.809038  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:45.913659  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:46.309035  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:46.415695  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:46.809412  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:46.911720  975141 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 21:32:47.309008  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:32:47.447906  975141 command_runner.go:130] > NAME      SECRETS   AGE
	I0830 21:32:47.447934  975141 command_runner.go:130] > default   0         0s
	I0830 21:32:47.447960  975141 kubeadm.go:1081] duration metric: took 11.03427944s to wait for elevateKubeSystemPrivileges.
	I0830 21:32:47.447982  975141 kubeadm.go:406] StartCluster complete in 25.372423889s
	I0830 21:32:47.448012  975141 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:47.448121  975141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:32:47.449164  975141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:32:47.449423  975141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 21:32:47.449488  975141 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 21:32:47.449639  975141 addons.go:69] Setting storage-provisioner=true in profile "multinode-752665"
	I0830 21:32:47.449663  975141 addons.go:231] Setting addon storage-provisioner=true in "multinode-752665"
	I0830 21:32:47.449696  975141 addons.go:69] Setting default-storageclass=true in profile "multinode-752665"
	I0830 21:32:47.449701  975141 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:32:47.449735  975141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-752665"
	I0830 21:32:47.449742  975141 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:32:47.449861  975141 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:32:47.450209  975141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:32:47.450240  975141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:32:47.450274  975141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:32:47.450356  975141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:32:47.450285  975141 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:32:47.451311  975141 cert_rotation.go:137] Starting client certificate rotation controller
	I0830 21:32:47.451756  975141 round_trippers.go:463] GET https://192.168.39.20:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 21:32:47.451790  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:47.451828  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:47.451842  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:47.466352  975141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0830 21:32:47.466597  975141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0830 21:32:47.466805  975141 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:32:47.467052  975141 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:32:47.467355  975141 main.go:141] libmachine: Using API Version  1
	I0830 21:32:47.467377  975141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:32:47.467586  975141 main.go:141] libmachine: Using API Version  1
	I0830 21:32:47.467610  975141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:32:47.467691  975141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:32:47.467880  975141 main.go:141] libmachine: (multinode-752665) Calling .GetState
	I0830 21:32:47.467931  975141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:32:47.468375  975141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:32:47.468406  975141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:32:47.469849  975141 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0830 21:32:47.469869  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:47.469878  975141 round_trippers.go:580]     Content-Length: 291
	I0830 21:32:47.469886  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:47 GMT
	I0830 21:32:47.469896  975141 round_trippers.go:580]     Audit-Id: 8973f390-6325-49b0-a832-580c35449d87
	I0830 21:32:47.469905  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:47.469917  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:47.469927  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:47.469937  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:47.469942  975141 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:32:47.469973  975141 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4cda7228-5995-4a40-902e-7c8e87f8c72e","resourceVersion":"314","creationTimestamp":"2023-08-30T21:32:35Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0830 21:32:47.470320  975141 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:32:47.470543  975141 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4cda7228-5995-4a40-902e-7c8e87f8c72e","resourceVersion":"314","creationTimestamp":"2023-08-30T21:32:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0830 21:32:47.470605  975141 round_trippers.go:463] PUT https://192.168.39.20:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 21:32:47.470616  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:47.470627  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:47.470645  975141 round_trippers.go:473]     Content-Type: application/json
	I0830 21:32:47.470656  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:47.470775  975141 round_trippers.go:463] GET https://192.168.39.20:8443/apis/storage.k8s.io/v1/storageclasses
	I0830 21:32:47.470790  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:47.470800  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:47.470810  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:47.483725  975141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0830 21:32:47.484127  975141 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:32:47.484710  975141 main.go:141] libmachine: Using API Version  1
	I0830 21:32:47.484736  975141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:32:47.485055  975141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:32:47.485271  975141 main.go:141] libmachine: (multinode-752665) Calling .GetState
	I0830 21:32:47.486756  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:47.488453  975141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:32:47.487460  975141 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0830 21:32:47.489770  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:47.489787  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:47.489800  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:47.489808  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:47.489816  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:47.489827  975141 round_trippers.go:580]     Content-Length: 291
	I0830 21:32:47.489836  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:47 GMT
	I0830 21:32:47.489847  975141 round_trippers.go:580]     Audit-Id: dcdc765b-9e7d-4e01-a8cf-5a4bbb3821a9
	I0830 21:32:47.489878  975141 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4cda7228-5995-4a40-902e-7c8e87f8c72e","resourceVersion":"322","creationTimestamp":"2023-08-30T21:32:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0830 21:32:47.489897  975141 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:32:47.489916  975141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 21:32:47.489941  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:47.490053  975141 round_trippers.go:463] GET https://192.168.39.20:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 21:32:47.490066  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:47.490073  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:47.490079  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:47.493064  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:47.493506  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:47.493557  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:47.493732  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:47.493944  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:47.494154  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:47.494318  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:32:47.501714  975141 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0830 21:32:47.501737  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:47.501747  975141 round_trippers.go:580]     Audit-Id: fc2a8b76-b8be-4a9c-8f5e-b5e39d2ecb02
	I0830 21:32:47.501754  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:47.501762  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:47.501771  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:47.501780  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:47.501794  975141 round_trippers.go:580]     Content-Length: 109
	I0830 21:32:47.501807  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:47 GMT
	I0830 21:32:47.502816  975141 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"323"},"items":[]}
	I0830 21:32:47.503092  975141 addons.go:231] Setting addon default-storageclass=true in "multinode-752665"
	I0830 21:32:47.503128  975141 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:32:47.503408  975141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:32:47.503437  975141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:32:47.509900  975141 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0830 21:32:47.509919  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:47.509926  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:47.509933  975141 round_trippers.go:580]     Content-Length: 291
	I0830 21:32:47.509938  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:47 GMT
	I0830 21:32:47.509944  975141 round_trippers.go:580]     Audit-Id: 23f17215-906a-47f7-83bd-378f67eba4f6
	I0830 21:32:47.509952  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:47.509960  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:47.509967  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:47.512945  975141 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4cda7228-5995-4a40-902e-7c8e87f8c72e","resourceVersion":"322","creationTimestamp":"2023-08-30T21:32:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0830 21:32:47.513054  975141 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-752665" context rescaled to 1 replicas
	I0830 21:32:47.513083  975141 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:32:47.514602  975141 out.go:177] * Verifying Kubernetes components...
	I0830 21:32:47.516084  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:32:47.519149  975141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0830 21:32:47.519550  975141 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:32:47.520068  975141 main.go:141] libmachine: Using API Version  1
	I0830 21:32:47.520096  975141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:32:47.520417  975141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:32:47.521007  975141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:32:47.521061  975141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:32:47.535727  975141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0830 21:32:47.536210  975141 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:32:47.536708  975141 main.go:141] libmachine: Using API Version  1
	I0830 21:32:47.536735  975141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:32:47.537110  975141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:32:47.537298  975141 main.go:141] libmachine: (multinode-752665) Calling .GetState
	I0830 21:32:47.538914  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:32:47.539167  975141 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 21:32:47.539186  975141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 21:32:47.539210  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:32:47.541619  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:47.541988  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:32:47.542018  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:32:47.542225  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:32:47.542402  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:32:47.542570  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:32:47.542710  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:32:47.694761  975141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:32:47.726462  975141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 21:32:47.756429  975141 command_runner.go:130] > apiVersion: v1
	I0830 21:32:47.756452  975141 command_runner.go:130] > data:
	I0830 21:32:47.756458  975141 command_runner.go:130] >   Corefile: |
	I0830 21:32:47.756463  975141 command_runner.go:130] >     .:53 {
	I0830 21:32:47.756468  975141 command_runner.go:130] >         errors
	I0830 21:32:47.756475  975141 command_runner.go:130] >         health {
	I0830 21:32:47.756482  975141 command_runner.go:130] >            lameduck 5s
	I0830 21:32:47.756488  975141 command_runner.go:130] >         }
	I0830 21:32:47.756493  975141 command_runner.go:130] >         ready
	I0830 21:32:47.756503  975141 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0830 21:32:47.756515  975141 command_runner.go:130] >            pods insecure
	I0830 21:32:47.756525  975141 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0830 21:32:47.756536  975141 command_runner.go:130] >            ttl 30
	I0830 21:32:47.756545  975141 command_runner.go:130] >         }
	I0830 21:32:47.756560  975141 command_runner.go:130] >         prometheus :9153
	I0830 21:32:47.756576  975141 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0830 21:32:47.756589  975141 command_runner.go:130] >            max_concurrent 1000
	I0830 21:32:47.756597  975141 command_runner.go:130] >         }
	I0830 21:32:47.756604  975141 command_runner.go:130] >         cache 30
	I0830 21:32:47.756614  975141 command_runner.go:130] >         loop
	I0830 21:32:47.756621  975141 command_runner.go:130] >         reload
	I0830 21:32:47.756631  975141 command_runner.go:130] >         loadbalance
	I0830 21:32:47.756640  975141 command_runner.go:130] >     }
	I0830 21:32:47.756647  975141 command_runner.go:130] > kind: ConfigMap
	I0830 21:32:47.756656  975141 command_runner.go:130] > metadata:
	I0830 21:32:47.756683  975141 command_runner.go:130] >   creationTimestamp: "2023-08-30T21:32:35Z"
	I0830 21:32:47.756693  975141 command_runner.go:130] >   name: coredns
	I0830 21:32:47.756700  975141 command_runner.go:130] >   namespace: kube-system
	I0830 21:32:47.756707  975141 command_runner.go:130] >   resourceVersion: "231"
	I0830 21:32:47.756718  975141 command_runner.go:130] >   uid: 27acb354-b614-4ab9-9a76-162f2b2cdad9
	I0830 21:32:47.757802  975141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
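The bash pipeline above edits the coredns ConfigMap dumped just before it: a hosts block mapping host.minikube.internal to the host gateway (192.168.39.1) is inserted ahead of the forward plugin, and a log directive is added after errors, so workloads inside the cluster can resolve the host machine by name. Below is a rough client-go equivalent of that edit; it is an assumption-laden illustration rather than minikube's code, and the kubeconfig path is a placeholder.

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // injectHostRecord adds a hosts block for host.minikube.internal ahead of the
    // forward plugin in the coredns Corefile, mirroring the sed expression above.
    func injectHostRecord(cs *kubernetes.Clientset, hostIP string) error {
        ctx := context.Background()
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }

    func main() {
        // Kubeconfig path is a placeholder assumption.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := injectHostRecord(cs, "192.168.39.1"); err != nil {
            panic(err)
        }
    }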
	I0830 21:32:47.758071  975141 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:32:47.758315  975141 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:32:47.758575  975141 node_ready.go:35] waiting up to 6m0s for node "multinode-752665" to be "Ready" ...
	I0830 21:32:47.758644  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:47.758651  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:47.758664  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:47.758673  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:47.794158  975141 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0830 21:32:47.794184  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:47.794192  975141 round_trippers.go:580]     Audit-Id: b0a7d0d5-8116-4c1c-914e-605d2d8e1bf4
	I0830 21:32:47.794198  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:47.794203  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:47.794208  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:47.794214  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:47.794220  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:47 GMT
	I0830 21:32:47.826560  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:47.827318  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:47.827337  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:47.827348  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:47.827357  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:47.853899  975141 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0830 21:32:47.853928  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:47.853940  975141 round_trippers.go:580]     Audit-Id: 54a0ec61-b93b-4df1-aa5e-6d03927cafd9
	I0830 21:32:47.853949  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:47.853963  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:47.853972  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:47.853980  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:47.853988  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:47 GMT
	I0830 21:32:47.854137  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:48.355435  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:48.355469  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:48.355482  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:48.355493  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:48.361609  975141 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0830 21:32:48.361636  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:48.361644  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:48 GMT
	I0830 21:32:48.361649  975141 round_trippers.go:580]     Audit-Id: 717d92e2-566b-4f6a-b753-13722061ead4
	I0830 21:32:48.361655  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:48.361663  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:48.361671  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:48.361681  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:48.361802  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:48.741098  975141 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0830 21:32:48.741134  975141 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0830 21:32:48.741144  975141 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0830 21:32:48.741152  975141 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0830 21:32:48.741157  975141 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0830 21:32:48.741161  975141 command_runner.go:130] > pod/storage-provisioner created
	I0830 21:32:48.741177  975141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.046391272s)
	I0830 21:32:48.741214  975141 main.go:141] libmachine: Making call to close driver server
	I0830 21:32:48.741232  975141 main.go:141] libmachine: (multinode-752665) Calling .Close
	I0830 21:32:48.741241  975141 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0830 21:32:48.741292  975141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.014801098s)
	I0830 21:32:48.741304  975141 command_runner.go:130] > configmap/coredns replaced
	I0830 21:32:48.741332  975141 main.go:141] libmachine: Making call to close driver server
	I0830 21:32:48.741348  975141 main.go:141] libmachine: (multinode-752665) Calling .Close
	I0830 21:32:48.741332  975141 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0830 21:32:48.741526  975141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:32:48.741559  975141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:32:48.741590  975141 main.go:141] libmachine: Making call to close driver server
	I0830 21:32:48.741611  975141 main.go:141] libmachine: (multinode-752665) Calling .Close
	I0830 21:32:48.741722  975141 main.go:141] libmachine: (multinode-752665) DBG | Closing plugin on server side
	I0830 21:32:48.741741  975141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:32:48.741770  975141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:32:48.741787  975141 main.go:141] libmachine: Making call to close driver server
	I0830 21:32:48.741796  975141 main.go:141] libmachine: (multinode-752665) Calling .Close
	I0830 21:32:48.741824  975141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:32:48.741842  975141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:32:48.742142  975141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:32:48.742170  975141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:32:48.742189  975141 main.go:141] libmachine: Making call to close driver server
	I0830 21:32:48.742201  975141 main.go:141] libmachine: (multinode-752665) Calling .Close
	I0830 21:32:48.742399  975141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 21:32:48.742415  975141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 21:32:48.744435  975141 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 21:32:48.746138  975141 addons.go:502] enable addons completed in 1.296684333s: enabled=[storage-provisioner default-storageclass]
	I0830 21:32:48.854887  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:48.854909  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:48.854918  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:48.854925  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:48.857600  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:48.857622  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:48.857632  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:48.857640  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:48.857649  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:48 GMT
	I0830 21:32:48.857659  975141 round_trippers.go:580]     Audit-Id: 4e1cb04f-5d84-4f4f-a4ad-6197d4896ea3
	I0830 21:32:48.857668  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:48.857683  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:48.857833  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:49.355509  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:49.355534  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:49.355543  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:49.355549  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:49.358364  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:49.358381  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:49.358389  975141 round_trippers.go:580]     Audit-Id: 44fac2a6-6839-413c-bf61-a210f2db4119
	I0830 21:32:49.358395  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:49.358400  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:49.358406  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:49.358412  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:49.358419  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:49 GMT
	I0830 21:32:49.358636  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:49.855389  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:49.855414  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:49.855422  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:49.855428  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:49.858169  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:49.858201  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:49.858210  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:49.858215  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:49.858221  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:49 GMT
	I0830 21:32:49.858235  975141 round_trippers.go:580]     Audit-Id: 2243656d-3906-4764-8cf5-22c811828fe1
	I0830 21:32:49.858240  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:49.858245  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:49.858492  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:49.858854  975141 node_ready.go:58] node "multinode-752665" has status "Ready":"False"
	I0830 21:32:50.355256  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:50.355285  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:50.355298  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:50.355308  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:50.358107  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:50.358128  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:50.358136  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:50 GMT
	I0830 21:32:50.358142  975141 round_trippers.go:580]     Audit-Id: df261182-48e3-4272-97f0-4b4154fc6195
	I0830 21:32:50.358147  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:50.358153  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:50.358158  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:50.358171  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:50.358306  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:50.855043  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:50.855072  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:50.855085  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:50.855096  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:50.858138  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:50.858157  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:50.858170  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:50.858176  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:50.858181  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:50 GMT
	I0830 21:32:50.858187  975141 round_trippers.go:580]     Audit-Id: 635e3b6e-3be8-45a0-b7e3-a74434307f7b
	I0830 21:32:50.858192  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:50.858197  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:50.858355  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:51.355026  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:51.355055  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:51.355069  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:51.355079  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:51.358223  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:51.358248  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:51.358259  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:51.358268  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:51 GMT
	I0830 21:32:51.358278  975141 round_trippers.go:580]     Audit-Id: 94defc60-0e44-40a0-9e37-a09ff3a6ccb7
	I0830 21:32:51.358286  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:51.358294  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:51.358303  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:51.358424  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:51.855018  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:51.855040  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:51.855049  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:51.855055  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:51.858194  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:51.858218  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:51.858230  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:51.858236  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:51.858241  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:51.858247  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:51.858252  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:51 GMT
	I0830 21:32:51.858258  975141 round_trippers.go:580]     Audit-Id: 04347408-c4a3-4dd0-ad84-fe6402df2a09
	I0830 21:32:51.858469  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:51.858965  975141 node_ready.go:58] node "multinode-752665" has status "Ready":"False"
	I0830 21:32:52.355113  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:52.355138  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:52.355146  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:52.355152  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:52.357843  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:52.357880  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:52.357891  975141 round_trippers.go:580]     Audit-Id: 3f6cf099-10a8-43c8-ba5d-01c6b52ef099
	I0830 21:32:52.357900  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:52.357908  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:52.357915  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:52.357922  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:52.357930  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:52 GMT
	I0830 21:32:52.358199  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"310","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0830 21:32:52.855294  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:52.855319  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:52.855328  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:52.855335  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:52.857944  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:52.857964  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:52.857971  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:52.857976  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:52.857981  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:52.858006  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:52.858015  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:52 GMT
	I0830 21:32:52.858027  975141 round_trippers.go:580]     Audit-Id: d70771a1-789c-4c82-b8e8-f36a0d656247
	I0830 21:32:52.858942  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:52.859260  975141 node_ready.go:49] node "multinode-752665" has status "Ready":"True"
	I0830 21:32:52.859273  975141 node_ready.go:38] duration metric: took 5.100682727s waiting for node "multinode-752665" to be "Ready" ...
	I0830 21:32:52.859280  975141 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:32:52.859369  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:32:52.859376  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:52.859383  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:52.859390  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:52.862855  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:52.862870  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:52.862877  975141 round_trippers.go:580]     Audit-Id: 7c61ec01-d033-4c82-ace6-941886ae0c59
	I0830 21:32:52.862882  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:52.862887  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:52.862893  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:52.862898  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:52.862909  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:52 GMT
	I0830 21:32:52.863921  975141 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"390","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54554 chars]
	I0830 21:32:52.868553  975141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:52.868636  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:32:52.868646  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:52.868657  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:52.868669  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:52.870901  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:52.870922  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:52.870931  975141 round_trippers.go:580]     Audit-Id: f2758d7d-3cab-44f3-ab05-aae69e7dd42a
	I0830 21:32:52.870937  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:52.870943  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:52.870948  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:52.870953  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:52.870965  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:52 GMT
	I0830 21:32:52.871312  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"390","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0830 21:32:52.871830  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:52.871845  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:52.871853  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:52.871862  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:52.873737  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:52.873749  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:52.873755  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:52.873761  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:52.873766  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:52 GMT
	I0830 21:32:52.873771  975141 round_trippers.go:580]     Audit-Id: c658d185-1ea7-481c-bf8a-c05767238932
	I0830 21:32:52.873776  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:52.873781  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:52.874060  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:52.874370  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:32:52.874389  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:52.874396  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:52.874402  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:52.876162  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:52.876178  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:52.876188  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:52.876198  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:52.876211  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:52 GMT
	I0830 21:32:52.876217  975141 round_trippers.go:580]     Audit-Id: 804824f5-9d85-44f3-b820-f9beff9d881d
	I0830 21:32:52.876222  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:52.876228  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:52.876368  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"390","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0830 21:32:52.876700  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:52.876712  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:52.876718  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:52.876724  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:52.878394  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:52.878407  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:52.878413  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:52.878418  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:52.878423  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:52.878428  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:52.878434  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:52 GMT
	I0830 21:32:52.878441  975141 round_trippers.go:580]     Audit-Id: 50373f83-c01f-41a6-a5da-eba25a9b2377
	I0830 21:32:52.878647  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:53.379631  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:32:53.379665  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:53.379676  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:53.379685  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:53.383471  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:53.383491  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:53.383509  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:53.383514  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:53.383520  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:53.383528  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:53.383533  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:53 GMT
	I0830 21:32:53.383539  975141 round_trippers.go:580]     Audit-Id: 22ccf9af-be12-4a95-af0d-0483b905f49c
	I0830 21:32:53.383729  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"390","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0830 21:32:53.384442  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:53.384463  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:53.384474  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:53.384482  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:53.388217  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:53.388232  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:53.388238  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:53.388244  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:53.388249  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:53.388257  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:53 GMT
	I0830 21:32:53.388267  975141 round_trippers.go:580]     Audit-Id: 3c5b05db-cef1-4373-963d-0b7023b07126
	I0830 21:32:53.388272  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:53.388448  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:53.879132  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:32:53.879163  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:53.879173  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:53.879178  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:53.883144  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:53.883171  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:53.883178  975141 round_trippers.go:580]     Audit-Id: 7631846d-b2e4-477f-94d0-2826350214bc
	I0830 21:32:53.883184  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:53.883189  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:53.883194  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:53.883200  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:53.883207  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:53 GMT
	I0830 21:32:53.883370  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"390","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0830 21:32:53.883890  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:53.883906  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:53.883918  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:53.883927  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:53.888438  975141 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:32:53.888455  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:53.888462  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:53.888468  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:53 GMT
	I0830 21:32:53.888473  975141 round_trippers.go:580]     Audit-Id: 2d4b60c3-a909-4fd8-9251-dbac2f2c996a
	I0830 21:32:53.888478  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:53.888484  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:53.888489  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:53.888635  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:54.379368  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:32:54.379398  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:54.379410  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:54.379418  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:54.382344  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:54.382368  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:54.382376  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:54.382381  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:54.382387  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:54.382392  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:54.382398  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:54 GMT
	I0830 21:32:54.382403  975141 round_trippers.go:580]     Audit-Id: bfb26e6a-c32c-49a2-899d-3cfd638c33b5
	I0830 21:32:54.382597  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"390","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0830 21:32:54.383127  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:54.383144  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:54.383155  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:54.383169  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:54.385411  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:54.385431  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:54.385441  975141 round_trippers.go:580]     Audit-Id: c890f939-75b1-47db-a127-25011745ebaa
	I0830 21:32:54.385450  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:54.385455  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:54.385464  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:54.385470  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:54.385475  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:54 GMT
	I0830 21:32:54.385638  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:54.879268  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:32:54.879292  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:54.879300  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:54.879307  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:54.882585  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:54.882606  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:54.882617  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:54.882625  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:54 GMT
	I0830 21:32:54.882634  975141 round_trippers.go:580]     Audit-Id: e6cdf0d6-68d5-4af2-ab97-96e8001288f1
	I0830 21:32:54.882644  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:54.882654  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:54.882670  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:54.882974  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"402","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0830 21:32:54.883449  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:54.883464  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:54.883472  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:54.883478  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:54.885510  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:54.885525  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:54.885534  975141 round_trippers.go:580]     Audit-Id: e2c82188-5f34-468b-a241-98774c5d951c
	I0830 21:32:54.885551  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:54.885567  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:54.885576  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:54.885586  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:54.885596  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:54 GMT
	I0830 21:32:54.885851  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:54.886174  975141 pod_ready.go:92] pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace has status "Ready":"True"
	I0830 21:32:54.886191  975141 pod_ready.go:81] duration metric: took 2.017614556s waiting for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
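The pod_ready loop above repeatedly GETs the coredns pod (and its node) until the pod reports a Ready condition. A minimal sketch of that kind of readiness poll, assuming client-go, an illustrative kubeconfig path, and made-up helper names (this is not minikube's actual pod_ready.go code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has a Ready condition with status True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; minikube writes its own profile-specific config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-zcppg", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~500ms poll interval visible in the timestamps above
	}
	fmt.Println("timed out waiting for pod to be Ready")
}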
	I0830 21:32:54.886200  975141 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:54.886255  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-752665
	I0830 21:32:54.886262  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:54.886269  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:54.886275  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:54.888234  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:54.888249  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:54.888258  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:54.888267  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:54.888275  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:54 GMT
	I0830 21:32:54.888291  975141 round_trippers.go:580]     Audit-Id: 65be02a5-b8f8-46f0-a3e3-1945f5ba6b3c
	I0830 21:32:54.888306  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:54.888315  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:54.888534  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"294","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6068 chars]
	I0830 21:32:54.888863  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:54.888874  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:54.888880  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:54.888886  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:54.890703  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:54.890717  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:54.890726  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:54.890735  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:54.890744  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:54.890759  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:54 GMT
	I0830 21:32:54.890769  975141 round_trippers.go:580]     Audit-Id: 98fdb46d-0222-4706-94cf-68ba6b5ba914
	I0830 21:32:54.890781  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:54.890986  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:54.891309  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-752665
	I0830 21:32:54.891321  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:54.891331  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:54.891356  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:54.893297  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:54.893316  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:54.893326  975141 round_trippers.go:580]     Audit-Id: e8e4d98c-4ad7-4730-8a17-00a6a593d5bf
	I0830 21:32:54.893334  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:54.893342  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:54.893351  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:54.893359  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:54.893365  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:54 GMT
	I0830 21:32:54.893525  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"294","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6068 chars]
	I0830 21:32:54.893838  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:54.893851  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:54.893858  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:54.893864  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:54.895827  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:54.895843  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:54.895849  975141 round_trippers.go:580]     Audit-Id: ecfb80af-e551-4ef6-8530-c060770701ca
	I0830 21:32:54.895854  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:54.895860  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:54.895865  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:54.895870  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:54.895876  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:54 GMT
	I0830 21:32:54.896127  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:55.397008  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-752665
	I0830 21:32:55.397031  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:55.397039  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:55.397045  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:55.399010  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:55.399036  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:55.399055  975141 round_trippers.go:580]     Audit-Id: 18884dc2-07a1-4be0-98ef-6e0501217bfa
	I0830 21:32:55.399061  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:55.399066  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:55.399072  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:55.399079  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:55.399088  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:55 GMT
	I0830 21:32:55.399457  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"407","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0830 21:32:55.400034  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:55.400058  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:55.400069  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:55.400078  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:55.405741  975141 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0830 21:32:55.405763  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:55.405773  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:55.405782  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:55 GMT
	I0830 21:32:55.405791  975141 round_trippers.go:580]     Audit-Id: 8ffa199c-adbd-4c19-ac93-655bdf625c1e
	I0830 21:32:55.405799  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:55.405806  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:55.405827  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:55.406023  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:55.406426  975141 pod_ready.go:92] pod "etcd-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:32:55.406444  975141 pod_ready.go:81] duration metric: took 520.238546ms waiting for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:55.406456  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:55.406519  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-752665
	I0830 21:32:55.406527  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:55.406535  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:55.406540  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:55.410068  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:55.410091  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:55.410099  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:55.410107  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:55.410115  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:55 GMT
	I0830 21:32:55.410123  975141 round_trippers.go:580]     Audit-Id: 26dc3bba-ee6e-459c-9780-972d30dc18a3
	I0830 21:32:55.410130  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:55.410138  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:55.410342  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-752665","namespace":"kube-system","uid":"d813d11d-d0ec-4091-a72b-187bd44eabe3","resourceVersion":"408","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.20:8443","kubernetes.io/config.hash":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.mirror":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.seen":"2023-08-30T21:32:26.214498990Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0830 21:32:55.410804  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:55.410818  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:55.410825  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:55.410830  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:55.413324  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:55.413345  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:55.413356  975141 round_trippers.go:580]     Audit-Id: 6373c26d-d0c0-4fa9-a88c-cba0b2b6a914
	I0830 21:32:55.413364  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:55.413372  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:55.413379  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:55.413387  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:55.413407  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:55 GMT
	I0830 21:32:55.414033  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:55.414470  975141 pod_ready.go:92] pod "kube-apiserver-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:32:55.414490  975141 pod_ready.go:81] duration metric: took 8.028562ms waiting for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:55.414505  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:55.455825  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-752665
	I0830 21:32:55.455846  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:55.455862  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:55.455868  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:55.458608  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:55.458632  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:55.458641  975141 round_trippers.go:580]     Audit-Id: aacf5992-865d-4d42-a93b-ae3dc867ba2d
	I0830 21:32:55.458650  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:55.458659  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:55.458667  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:55.458675  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:55.458682  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:55 GMT
	I0830 21:32:55.459231  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-752665","namespace":"kube-system","uid":"0391b35f-5177-412c-b7d4-073efb2de36b","resourceVersion":"409","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.mirror":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.seen":"2023-08-30T21:32:26.214500244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0830 21:32:55.656095  975141 request.go:629] Waited for 196.390982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:55.656176  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:55.656187  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:55.656196  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:55.656202  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:55.659465  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:55.659485  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:55.659492  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:55 GMT
	I0830 21:32:55.659497  975141 round_trippers.go:580]     Audit-Id: 57149860-47cf-42bc-9328-ee4b47705e31
	I0830 21:32:55.659503  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:55.659508  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:55.659513  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:55.659519  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:55.659732  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:55.660149  975141 pod_ready.go:92] pod "kube-controller-manager-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:32:55.660175  975141 pod_ready.go:81] duration metric: took 245.663191ms waiting for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
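The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's default client-side rate limiter, which queues requests once the client exceeds its QPS/Burst budget (distinct from server-side API Priority and Fairness). A minimal sketch, assuming client-go, of where those knobs live; the values here are illustrative, not what minikube configures:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	// client-go throttles on the client side once requests exceed QPS, with
	// short spikes allowed up to Burst; exceeding the budget produces the
	// "Waited for ... due to client-side throttling" lines seen in this log.
	cfg.QPS = 5    // steady-state requests per second (illustrative)
	cfg.Burst = 10 // short-term burst allowance (illustrative)

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}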
	I0830 21:32:55.660185  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:55.855617  975141 request.go:629] Waited for 195.349794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:32:55.855701  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:32:55.855713  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:55.855722  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:55.855728  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:55.858141  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:55.858179  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:55.858190  975141 round_trippers.go:580]     Audit-Id: f3001bda-aa8b-4e55-81d3-3068c233193c
	I0830 21:32:55.858199  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:55.858208  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:55.858216  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:55.858224  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:55.858234  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:55 GMT
	I0830 21:32:55.858390  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vltx5","generateName":"kube-proxy-","namespace":"kube-system","uid":"24ee271e-5778-4d0c-ab2c-77426f2673b3","resourceVersion":"375","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0830 21:32:56.056356  975141 request.go:629] Waited for 197.392878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:56.056419  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:56.056423  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:56.056431  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:56.056437  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:56.059044  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:56.059064  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:56.059071  975141 round_trippers.go:580]     Audit-Id: 22b77d13-d934-4bd5-af9d-589653e1f9af
	I0830 21:32:56.059076  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:56.059082  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:56.059087  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:56.059093  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:56.059098  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:56 GMT
	I0830 21:32:56.059254  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:56.059568  975141 pod_ready.go:92] pod "kube-proxy-vltx5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:32:56.059581  975141 pod_ready.go:81] duration metric: took 399.389475ms waiting for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:56.059590  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:56.256045  975141 request.go:629] Waited for 196.365661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:32:56.256119  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:32:56.256124  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:56.256132  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:56.256140  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:56.258869  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:56.258898  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:56.258906  975141 round_trippers.go:580]     Audit-Id: 1df4e883-720d-40ff-bb6f-f007eaf89fd8
	I0830 21:32:56.258913  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:56.258919  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:56.258925  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:56.258930  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:56.258936  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:56 GMT
	I0830 21:32:56.259063  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-752665","namespace":"kube-system","uid":"4c8a6a98-51b6-4010-9519-add75ab1a7a9","resourceVersion":"353","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.mirror":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.seen":"2023-08-30T21:32:35.235897289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0830 21:32:56.455908  975141 request.go:629] Waited for 196.412378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:56.455992  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:32:56.455999  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:56.456007  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:56.456016  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:56.458815  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:56.458843  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:56.458851  975141 round_trippers.go:580]     Audit-Id: 544a6b7c-0a71-4580-91b7-b2fc33c28cab
	I0830 21:32:56.458856  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:56.458867  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:56.458873  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:56.458878  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:56.458884  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:56 GMT
	I0830 21:32:56.459046  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:32:56.459387  975141 pod_ready.go:92] pod "kube-scheduler-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:32:56.459403  975141 pod_ready.go:81] duration metric: took 399.806633ms waiting for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:32:56.459413  975141 pod_ready.go:38] duration metric: took 3.600106911s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:32:56.459430  975141 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:32:56.459483  975141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:32:56.472899  975141 command_runner.go:130] > 1107
	I0830 21:32:56.472945  975141 api_server.go:72] duration metric: took 8.959831576s to wait for apiserver process to appear ...
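Before probing healthz, the log shows the apiserver process check: `sudo pgrep -xnf kube-apiserver.*minikube.*` is run on the node over SSH and the printed PID (1107 here) is taken as proof the process exists. A minimal local sketch of the same idea with os/exec, omitting the SSH transport minikube actually uses:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// -f matches against the full command line, -x requires the pattern to
	// match it exactly, and -n selects the newest matching process.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Println("unexpected pgrep output:", string(out))
		return
	}
	fmt.Println("kube-apiserver is running with PID", pid) // this run saw PID 1107
}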
	I0830 21:32:56.472960  975141 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:32:56.472988  975141 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:32:56.478239  975141 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I0830 21:32:56.478306  975141 round_trippers.go:463] GET https://192.168.39.20:8443/version
	I0830 21:32:56.478312  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:56.478330  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:56.478341  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:56.479393  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:32:56.479412  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:56.479420  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:56.479426  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:56.479431  975141 round_trippers.go:580]     Content-Length: 263
	I0830 21:32:56.479437  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:56 GMT
	I0830 21:32:56.479443  975141 round_trippers.go:580]     Audit-Id: 056839ff-1989-41f0-98f8-125fb2cdd5bd
	I0830 21:32:56.479448  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:56.479453  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:56.479477  975141 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0830 21:32:56.479572  975141 api_server.go:141] control plane version: v1.28.1
	I0830 21:32:56.479587  975141 api_server.go:131] duration metric: took 6.620161ms to wait for apiserver health ...
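The health wait above is two probes: GET /healthz, which a healthy apiserver answers with a 200 and the literal body "ok", then GET /version, whose JSON payload (major/minor/gitVersion/...) is what yields the "control plane version: v1.28.1" line. A minimal sketch of both probes using client-go's discovery/REST client, under the same assumed kubeconfig path as above:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz: expect a 200 response whose body is simply "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))

	// GET /version: the same JSON document shown in the log above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.1 in this run
}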
	I0830 21:32:56.479593  975141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:32:56.656055  975141 request.go:629] Waited for 176.365513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:32:56.656130  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:32:56.656135  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:56.656144  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:56.656150  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:56.660708  975141 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:32:56.660727  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:56.660734  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:56 GMT
	I0830 21:32:56.660740  975141 round_trippers.go:580]     Audit-Id: 0f9d2929-0f77-4961-806a-df7c3631fe1e
	I0830 21:32:56.660745  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:56.660750  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:56.660755  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:56.660760  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:56.662240  975141 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"402","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53957 chars]
	I0830 21:32:56.664059  975141 system_pods.go:59] 8 kube-system pods found
	I0830 21:32:56.664093  975141 system_pods.go:61] "coredns-5dd5756b68-zcppg" [4742270b-6c64-411b-bfb6-8c53211aa106] Running
	I0830 21:32:56.664099  975141 system_pods.go:61] "etcd-multinode-752665" [25e2609d-f391-4e71-823a-c4fe8625092d] Running
	I0830 21:32:56.664103  975141 system_pods.go:61] "kindnet-x5kk4" [2fdd77f6-856a-4400-b881-210549c588e2] Running
	I0830 21:32:56.664107  975141 system_pods.go:61] "kube-apiserver-multinode-752665" [d813d11d-d0ec-4091-a72b-187bd44eabe3] Running
	I0830 21:32:56.664111  975141 system_pods.go:61] "kube-controller-manager-multinode-752665" [0391b35f-5177-412c-b7d4-073efb2de36b] Running
	I0830 21:32:56.664116  975141 system_pods.go:61] "kube-proxy-vltx5" [24ee271e-5778-4d0c-ab2c-77426f2673b3] Running
	I0830 21:32:56.664120  975141 system_pods.go:61] "kube-scheduler-multinode-752665" [4c8a6a98-51b6-4010-9519-add75ab1a7a9] Running
	I0830 21:32:56.664123  975141 system_pods.go:61] "storage-provisioner" [67db5a8a-290a-40a7-b42e-212d99db812a] Running
	I0830 21:32:56.664129  975141 system_pods.go:74] duration metric: took 184.530843ms to wait for pod list to return data ...
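The "8 kube-system pods found ... Running" summary above comes from a single list of the kube-system namespace. A minimal sketch of that check, assuming client-go; testing the pod phase for Running is a simplification of minikube's own status handling:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		state := "NotRunning"
		if p.Status.Phase == corev1.PodRunning {
			state = "Running"
		}
		// The log prints each pod as name [uid] state, e.g. "coredns-..." [...] Running.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, state)
	}
}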
	I0830 21:32:56.664139  975141 default_sa.go:34] waiting for default service account to be created ...
	I0830 21:32:56.855643  975141 request.go:629] Waited for 191.410398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/default/serviceaccounts
	I0830 21:32:56.855728  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/default/serviceaccounts
	I0830 21:32:56.855733  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:56.855742  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:56.855749  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:56.858539  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:56.858583  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:56.858594  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:56.858604  975141 round_trippers.go:580]     Content-Length: 261
	I0830 21:32:56.858613  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:56 GMT
	I0830 21:32:56.858622  975141 round_trippers.go:580]     Audit-Id: 667eeb03-7e0e-4e9d-8a91-ca6b5bfda9d5
	I0830 21:32:56.858631  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:56.858640  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:56.858653  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:56.858733  975141 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"59f61465-a2d3-4fe6-934b-a1516977e952","resourceVersion":"315","creationTimestamp":"2023-08-30T21:32:47Z"}}]}
	I0830 21:32:56.858953  975141 default_sa.go:45] found service account: "default"
	I0830 21:32:56.858967  975141 default_sa.go:55] duration metric: took 194.823962ms for default service account to be created ...
	I0830 21:32:56.858977  975141 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 21:32:57.055334  975141 request.go:629] Waited for 196.290499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:32:57.055434  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:32:57.055439  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:57.055447  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:57.055454  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:57.059173  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:32:57.059196  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:57.059206  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:57.059215  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:57.059224  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:57 GMT
	I0830 21:32:57.059232  975141 round_trippers.go:580]     Audit-Id: 9131af9f-1bd3-413a-b092-3bef1708ce50
	I0830 21:32:57.059241  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:57.059251  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:57.060472  975141 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"402","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53957 chars]
	I0830 21:32:57.062214  975141 system_pods.go:86] 8 kube-system pods found
	I0830 21:32:57.062233  975141 system_pods.go:89] "coredns-5dd5756b68-zcppg" [4742270b-6c64-411b-bfb6-8c53211aa106] Running
	I0830 21:32:57.062238  975141 system_pods.go:89] "etcd-multinode-752665" [25e2609d-f391-4e71-823a-c4fe8625092d] Running
	I0830 21:32:57.062244  975141 system_pods.go:89] "kindnet-x5kk4" [2fdd77f6-856a-4400-b881-210549c588e2] Running
	I0830 21:32:57.062249  975141 system_pods.go:89] "kube-apiserver-multinode-752665" [d813d11d-d0ec-4091-a72b-187bd44eabe3] Running
	I0830 21:32:57.062257  975141 system_pods.go:89] "kube-controller-manager-multinode-752665" [0391b35f-5177-412c-b7d4-073efb2de36b] Running
	I0830 21:32:57.062264  975141 system_pods.go:89] "kube-proxy-vltx5" [24ee271e-5778-4d0c-ab2c-77426f2673b3] Running
	I0830 21:32:57.062268  975141 system_pods.go:89] "kube-scheduler-multinode-752665" [4c8a6a98-51b6-4010-9519-add75ab1a7a9] Running
	I0830 21:32:57.062274  975141 system_pods.go:89] "storage-provisioner" [67db5a8a-290a-40a7-b42e-212d99db812a] Running
	I0830 21:32:57.062280  975141 system_pods.go:126] duration metric: took 203.299175ms to wait for k8s-apps to be running ...
	I0830 21:32:57.062289  975141 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:32:57.062332  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:32:57.074845  975141 system_svc.go:56] duration metric: took 12.548497ms WaitForService to wait for kubelet.
	I0830 21:32:57.074865  975141 kubeadm.go:581] duration metric: took 9.561754749s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:32:57.074893  975141 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:32:57.255279  975141 request.go:629] Waited for 180.306663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes
	I0830 21:32:57.255361  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes
	I0830 21:32:57.255366  975141 round_trippers.go:469] Request Headers:
	I0830 21:32:57.255374  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:32:57.255380  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:32:57.258114  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:32:57.258134  975141 round_trippers.go:577] Response Headers:
	I0830 21:32:57.258141  975141 round_trippers.go:580]     Audit-Id: a37f0aeb-d0da-47f5-8c53-0099eb0c4c05
	I0830 21:32:57.258147  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:32:57.258160  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:32:57.258168  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:32:57.258178  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:32:57.258187  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:32:57 GMT
	I0830 21:32:57.258481  975141 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I0830 21:32:57.258990  975141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:32:57.259016  975141 node_conditions.go:123] node cpu capacity is 2
	I0830 21:32:57.259027  975141 node_conditions.go:105] duration metric: took 184.130054ms to run NodePressure ...
	I0830 21:32:57.259038  975141 start.go:228] waiting for startup goroutines ...
	I0830 21:32:57.259044  975141 start.go:233] waiting for cluster config update ...
	I0830 21:32:57.259055  975141 start.go:242] writing updated cluster config ...
	I0830 21:32:57.261619  975141 out.go:177] 
	I0830 21:32:57.263222  975141 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:32:57.263299  975141 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:32:57.265063  975141 out.go:177] * Starting worker node multinode-752665-m02 in cluster multinode-752665
	I0830 21:32:57.266304  975141 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:32:57.266324  975141 cache.go:57] Caching tarball of preloaded images
	I0830 21:32:57.266402  975141 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 21:32:57.266412  975141 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 21:32:57.266495  975141 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:32:57.266643  975141 start.go:365] acquiring machines lock for multinode-752665-m02: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:32:57.266693  975141 start.go:369] acquired machines lock for "multinode-752665-m02" in 33.213µs
	I0830 21:32:57.266709  975141 start.go:93] Provisioning new machine with config: &{Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 21:32:57.266771  975141 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0830 21:32:57.268458  975141 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0830 21:32:57.268555  975141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:32:57.268583  975141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:32:57.283121  975141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0830 21:32:57.283545  975141 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:32:57.284089  975141 main.go:141] libmachine: Using API Version  1
	I0830 21:32:57.284111  975141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:32:57.284430  975141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:32:57.284625  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetMachineName
	I0830 21:32:57.284768  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:32:57.284923  975141 start.go:159] libmachine.API.Create for "multinode-752665" (driver="kvm2")
	I0830 21:32:57.284951  975141 client.go:168] LocalClient.Create starting
	I0830 21:32:57.284995  975141 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem
	I0830 21:32:57.285043  975141 main.go:141] libmachine: Decoding PEM data...
	I0830 21:32:57.285069  975141 main.go:141] libmachine: Parsing certificate...
	I0830 21:32:57.285141  975141 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem
	I0830 21:32:57.285175  975141 main.go:141] libmachine: Decoding PEM data...
	I0830 21:32:57.285195  975141 main.go:141] libmachine: Parsing certificate...
	I0830 21:32:57.285237  975141 main.go:141] libmachine: Running pre-create checks...
	I0830 21:32:57.285255  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .PreCreateCheck
	I0830 21:32:57.285425  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetConfigRaw
	I0830 21:32:57.285800  975141 main.go:141] libmachine: Creating machine...
	I0830 21:32:57.285816  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .Create
	I0830 21:32:57.285959  975141 main.go:141] libmachine: (multinode-752665-m02) Creating KVM machine...
	I0830 21:32:57.287052  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found existing default KVM network
	I0830 21:32:57.287245  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found existing private KVM network mk-multinode-752665
	I0830 21:32:57.287399  975141 main.go:141] libmachine: (multinode-752665-m02) Setting up store path in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02 ...
	I0830 21:32:57.287423  975141 main.go:141] libmachine: (multinode-752665-m02) Building disk image from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 21:32:57.287492  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:32:57.287383  975531 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:32:57.287592  975141 main.go:141] libmachine: (multinode-752665-m02) Downloading /home/jenkins/minikube-integration/17114-955377/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 21:32:57.520215  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:32:57.520115  975531 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa...
	I0830 21:32:57.665416  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:32:57.665297  975531 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/multinode-752665-m02.rawdisk...
	I0830 21:32:57.665447  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Writing magic tar header
	I0830 21:32:57.665459  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Writing SSH key tar header
	I0830 21:32:57.665468  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:32:57.665417  975531 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02 ...
	I0830 21:32:57.665593  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02
	I0830 21:32:57.665620  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines
	I0830 21:32:57.665634  975141 main.go:141] libmachine: (multinode-752665-m02) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02 (perms=drwx------)
	I0830 21:32:57.665661  975141 main.go:141] libmachine: (multinode-752665-m02) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines (perms=drwxr-xr-x)
	I0830 21:32:57.665679  975141 main.go:141] libmachine: (multinode-752665-m02) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube (perms=drwxr-xr-x)
	I0830 21:32:57.665700  975141 main.go:141] libmachine: (multinode-752665-m02) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377 (perms=drwxrwxr-x)
	I0830 21:32:57.665721  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:32:57.665735  975141 main.go:141] libmachine: (multinode-752665-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 21:32:57.665749  975141 main.go:141] libmachine: (multinode-752665-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 21:32:57.665768  975141 main.go:141] libmachine: (multinode-752665-m02) Creating domain...
	I0830 21:32:57.665788  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377
	I0830 21:32:57.665802  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 21:32:57.665818  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Checking permissions on dir: /home/jenkins
	I0830 21:32:57.665832  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Checking permissions on dir: /home
	I0830 21:32:57.665848  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Skipping /home - not owner
	I0830 21:32:57.666733  975141 main.go:141] libmachine: (multinode-752665-m02) define libvirt domain using xml: 
	I0830 21:32:57.666753  975141 main.go:141] libmachine: (multinode-752665-m02) <domain type='kvm'>
	I0830 21:32:57.666760  975141 main.go:141] libmachine: (multinode-752665-m02)   <name>multinode-752665-m02</name>
	I0830 21:32:57.666766  975141 main.go:141] libmachine: (multinode-752665-m02)   <memory unit='MiB'>2200</memory>
	I0830 21:32:57.666774  975141 main.go:141] libmachine: (multinode-752665-m02)   <vcpu>2</vcpu>
	I0830 21:32:57.666782  975141 main.go:141] libmachine: (multinode-752665-m02)   <features>
	I0830 21:32:57.666795  975141 main.go:141] libmachine: (multinode-752665-m02)     <acpi/>
	I0830 21:32:57.666807  975141 main.go:141] libmachine: (multinode-752665-m02)     <apic/>
	I0830 21:32:57.666817  975141 main.go:141] libmachine: (multinode-752665-m02)     <pae/>
	I0830 21:32:57.666822  975141 main.go:141] libmachine: (multinode-752665-m02)     
	I0830 21:32:57.666829  975141 main.go:141] libmachine: (multinode-752665-m02)   </features>
	I0830 21:32:57.666834  975141 main.go:141] libmachine: (multinode-752665-m02)   <cpu mode='host-passthrough'>
	I0830 21:32:57.666858  975141 main.go:141] libmachine: (multinode-752665-m02)   
	I0830 21:32:57.666893  975141 main.go:141] libmachine: (multinode-752665-m02)   </cpu>
	I0830 21:32:57.666911  975141 main.go:141] libmachine: (multinode-752665-m02)   <os>
	I0830 21:32:57.666920  975141 main.go:141] libmachine: (multinode-752665-m02)     <type>hvm</type>
	I0830 21:32:57.666935  975141 main.go:141] libmachine: (multinode-752665-m02)     <boot dev='cdrom'/>
	I0830 21:32:57.666943  975141 main.go:141] libmachine: (multinode-752665-m02)     <boot dev='hd'/>
	I0830 21:32:57.666951  975141 main.go:141] libmachine: (multinode-752665-m02)     <bootmenu enable='no'/>
	I0830 21:32:57.666959  975141 main.go:141] libmachine: (multinode-752665-m02)   </os>
	I0830 21:32:57.666965  975141 main.go:141] libmachine: (multinode-752665-m02)   <devices>
	I0830 21:32:57.666973  975141 main.go:141] libmachine: (multinode-752665-m02)     <disk type='file' device='cdrom'>
	I0830 21:32:57.666991  975141 main.go:141] libmachine: (multinode-752665-m02)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/boot2docker.iso'/>
	I0830 21:32:57.667006  975141 main.go:141] libmachine: (multinode-752665-m02)       <target dev='hdc' bus='scsi'/>
	I0830 21:32:57.667037  975141 main.go:141] libmachine: (multinode-752665-m02)       <readonly/>
	I0830 21:32:57.667066  975141 main.go:141] libmachine: (multinode-752665-m02)     </disk>
	I0830 21:32:57.667082  975141 main.go:141] libmachine: (multinode-752665-m02)     <disk type='file' device='disk'>
	I0830 21:32:57.667101  975141 main.go:141] libmachine: (multinode-752665-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 21:32:57.667120  975141 main.go:141] libmachine: (multinode-752665-m02)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/multinode-752665-m02.rawdisk'/>
	I0830 21:32:57.667132  975141 main.go:141] libmachine: (multinode-752665-m02)       <target dev='hda' bus='virtio'/>
	I0830 21:32:57.667145  975141 main.go:141] libmachine: (multinode-752665-m02)     </disk>
	I0830 21:32:57.667163  975141 main.go:141] libmachine: (multinode-752665-m02)     <interface type='network'>
	I0830 21:32:57.667177  975141 main.go:141] libmachine: (multinode-752665-m02)       <source network='mk-multinode-752665'/>
	I0830 21:32:57.667191  975141 main.go:141] libmachine: (multinode-752665-m02)       <model type='virtio'/>
	I0830 21:32:57.667215  975141 main.go:141] libmachine: (multinode-752665-m02)     </interface>
	I0830 21:32:57.667236  975141 main.go:141] libmachine: (multinode-752665-m02)     <interface type='network'>
	I0830 21:32:57.667250  975141 main.go:141] libmachine: (multinode-752665-m02)       <source network='default'/>
	I0830 21:32:57.667264  975141 main.go:141] libmachine: (multinode-752665-m02)       <model type='virtio'/>
	I0830 21:32:57.667278  975141 main.go:141] libmachine: (multinode-752665-m02)     </interface>
	I0830 21:32:57.667290  975141 main.go:141] libmachine: (multinode-752665-m02)     <serial type='pty'>
	I0830 21:32:57.667300  975141 main.go:141] libmachine: (multinode-752665-m02)       <target port='0'/>
	I0830 21:32:57.667317  975141 main.go:141] libmachine: (multinode-752665-m02)     </serial>
	I0830 21:32:57.667331  975141 main.go:141] libmachine: (multinode-752665-m02)     <console type='pty'>
	I0830 21:32:57.667343  975141 main.go:141] libmachine: (multinode-752665-m02)       <target type='serial' port='0'/>
	I0830 21:32:57.667356  975141 main.go:141] libmachine: (multinode-752665-m02)     </console>
	I0830 21:32:57.667365  975141 main.go:141] libmachine: (multinode-752665-m02)     <rng model='virtio'>
	I0830 21:32:57.667375  975141 main.go:141] libmachine: (multinode-752665-m02)       <backend model='random'>/dev/random</backend>
	I0830 21:32:57.667387  975141 main.go:141] libmachine: (multinode-752665-m02)     </rng>
	I0830 21:32:57.667400  975141 main.go:141] libmachine: (multinode-752665-m02)     
	I0830 21:32:57.667443  975141 main.go:141] libmachine: (multinode-752665-m02)     
	I0830 21:32:57.667460  975141 main.go:141] libmachine: (multinode-752665-m02)   </devices>
	I0830 21:32:57.667471  975141 main.go:141] libmachine: (multinode-752665-m02) </domain>
	I0830 21:32:57.667480  975141 main.go:141] libmachine: (multinode-752665-m02) 
	I0830 21:32:57.674096  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:6c:79:de in network default
	I0830 21:32:57.674697  975141 main.go:141] libmachine: (multinode-752665-m02) Ensuring networks are active...
	I0830 21:32:57.674712  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:32:57.675411  975141 main.go:141] libmachine: (multinode-752665-m02) Ensuring network default is active
	I0830 21:32:57.675736  975141 main.go:141] libmachine: (multinode-752665-m02) Ensuring network mk-multinode-752665 is active
	I0830 21:32:57.676093  975141 main.go:141] libmachine: (multinode-752665-m02) Getting domain xml...
	I0830 21:32:57.676738  975141 main.go:141] libmachine: (multinode-752665-m02) Creating domain...
	I0830 21:32:58.885527  975141 main.go:141] libmachine: (multinode-752665-m02) Waiting to get IP...
	I0830 21:32:58.886185  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:32:58.886553  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:32:58.886589  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:32:58.886512  975531 retry.go:31] will retry after 302.484415ms: waiting for machine to come up
	I0830 21:32:59.191187  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:32:59.191629  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:32:59.191674  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:32:59.191570  975531 retry.go:31] will retry after 353.747934ms: waiting for machine to come up
	I0830 21:32:59.547327  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:32:59.547869  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:32:59.547894  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:32:59.547814  975531 retry.go:31] will retry after 392.806373ms: waiting for machine to come up
	I0830 21:32:59.942575  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:32:59.943004  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:32:59.943056  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:32:59.942963  975531 retry.go:31] will retry after 374.193407ms: waiting for machine to come up
	I0830 21:33:00.318494  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:00.318971  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:00.319011  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:00.318915  975531 retry.go:31] will retry after 607.448614ms: waiting for machine to come up
	I0830 21:33:00.928014  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:00.928528  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:00.928561  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:00.928492  975531 retry.go:31] will retry after 852.841301ms: waiting for machine to come up
	I0830 21:33:01.782448  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:01.782883  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:01.782908  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:01.782827  975531 retry.go:31] will retry after 983.36539ms: waiting for machine to come up
	I0830 21:33:02.768749  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:02.769192  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:02.769227  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:02.769108  975531 retry.go:31] will retry after 899.18343ms: waiting for machine to come up
	I0830 21:33:03.670257  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:03.670806  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:03.670831  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:03.670753  975531 retry.go:31] will retry after 1.675620416s: waiting for machine to come up
	I0830 21:33:05.348862  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:05.349305  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:05.349331  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:05.349272  975531 retry.go:31] will retry after 1.446789737s: waiting for machine to come up
	I0830 21:33:06.797156  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:06.797651  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:06.797688  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:06.797574  975531 retry.go:31] will retry after 2.055353451s: waiting for machine to come up
	I0830 21:33:08.854233  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:08.854635  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:08.854669  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:08.854588  975531 retry.go:31] will retry after 3.290683183s: waiting for machine to come up
	I0830 21:33:12.149001  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:12.149480  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:12.149514  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:12.149419  975531 retry.go:31] will retry after 2.85489338s: waiting for machine to come up
	I0830 21:33:15.006242  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:15.006599  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find current IP address of domain multinode-752665-m02 in network mk-multinode-752665
	I0830 21:33:15.006628  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | I0830 21:33:15.006555  975531 retry.go:31] will retry after 3.630853718s: waiting for machine to come up
	I0830 21:33:18.639616  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:18.640049  975141 main.go:141] libmachine: (multinode-752665-m02) Found IP for machine: 192.168.39.46
	I0830 21:33:18.640077  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has current primary IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:18.640084  975141 main.go:141] libmachine: (multinode-752665-m02) Reserving static IP address...
	I0830 21:33:18.640462  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | unable to find host DHCP lease matching {name: "multinode-752665-m02", mac: "52:54:00:63:5c:12", ip: "192.168.39.46"} in network mk-multinode-752665
	I0830 21:33:18.712416  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Getting to WaitForSSH function...
	I0830 21:33:18.712459  975141 main.go:141] libmachine: (multinode-752665-m02) Reserved static IP address: 192.168.39.46
	I0830 21:33:18.712475  975141 main.go:141] libmachine: (multinode-752665-m02) Waiting for SSH to be available...
	I0830 21:33:18.715426  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:18.716052  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:minikube Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:18.716112  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:18.716219  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Using SSH client type: external
	I0830 21:33:18.716250  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa (-rw-------)
	I0830 21:33:18.716306  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 21:33:18.716334  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | About to run SSH command:
	I0830 21:33:18.716347  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | exit 0
	I0830 21:33:18.807431  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | SSH cmd err, output: <nil>: 
	I0830 21:33:18.807660  975141 main.go:141] libmachine: (multinode-752665-m02) KVM machine creation complete!
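The repeated "will retry after ...: waiting for machine to come up" entries above are a retry loop with a growing, jittered delay while the new VM waits for a DHCP lease in network mk-multinode-752665. Below is a minimal standalone sketch of that pattern in Go; it is illustrative only, and the helper names (waitForIP, lookupLeaseIP), the specific delays, and the timeout are assumptions rather than minikube's actual retry.go API.

// Illustrative sketch: retry with growing, jittered backoff, approximating the
// "will retry after ..." behaviour seen in the log above. Not minikube code.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying the libvirt DHCP leases for the domain's
// MAC address. Here it simply fails a few times before "finding" an address.
func lookupLeaseIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.46", nil
}

// waitForIP retries lookupLeaseIP with an increasing, jittered delay until it
// succeeds or the deadline passes, mirroring the 302ms, 353ms, ... 3.6s steps logged.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupLeaseIP(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
		}
		// Add jitter and grow the delay before the next lease lookup.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}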
	I0830 21:33:18.807998  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetConfigRaw
	I0830 21:33:18.808619  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:33:18.808791  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:33:18.808908  975141 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 21:33:18.808923  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetState
	I0830 21:33:18.810128  975141 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 21:33:18.810142  975141 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 21:33:18.810148  975141 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 21:33:18.810159  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:18.812764  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:18.813211  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:18.813245  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:18.813402  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:18.813630  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:18.813802  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:18.813912  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:18.814054  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:33:18.814538  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:33:18.814552  975141 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 21:33:18.931097  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:33:18.931125  975141 main.go:141] libmachine: Detecting the provisioner...
	I0830 21:33:18.931139  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:18.934138  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:18.934398  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:18.934435  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:18.934614  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:18.934875  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:18.935064  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:18.935272  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:18.935499  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:33:18.935985  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:33:18.936000  975141 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 21:33:19.056837  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 21:33:19.056948  975141 main.go:141] libmachine: found compatible host: buildroot
	I0830 21:33:19.056961  975141 main.go:141] libmachine: Provisioning with buildroot...
	I0830 21:33:19.056974  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetMachineName
	I0830 21:33:19.057274  975141 buildroot.go:166] provisioning hostname "multinode-752665-m02"
	I0830 21:33:19.057308  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetMachineName
	I0830 21:33:19.057525  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:19.060353  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.060726  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:19.060751  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.060909  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:19.061090  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:19.061233  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:19.061412  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:19.061612  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:33:19.062041  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:33:19.062058  975141 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-752665-m02 && echo "multinode-752665-m02" | sudo tee /etc/hostname
	I0830 21:33:19.193773  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-752665-m02
	
	I0830 21:33:19.193821  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:19.196824  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.197183  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:19.197215  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.197434  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:19.197628  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:19.197822  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:19.197952  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:19.198097  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:33:19.198540  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:33:19.198559  975141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-752665-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-752665-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-752665-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:33:19.324410  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:33:19.324446  975141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 21:33:19.324467  975141 buildroot.go:174] setting up certificates
	I0830 21:33:19.324477  975141 provision.go:83] configureAuth start
	I0830 21:33:19.324487  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetMachineName
	I0830 21:33:19.324790  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetIP
	I0830 21:33:19.327321  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.327716  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:19.327750  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.327912  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:19.330177  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.330501  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:19.330531  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.330633  975141 provision.go:138] copyHostCerts
	I0830 21:33:19.330669  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:33:19.330701  975141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 21:33:19.330710  975141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:33:19.330782  975141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 21:33:19.330888  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:33:19.330918  975141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 21:33:19.330928  975141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:33:19.330966  975141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 21:33:19.331042  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:33:19.331065  975141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 21:33:19.331074  975141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:33:19.331108  975141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 21:33:19.331173  975141 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.multinode-752665-m02 san=[192.168.39.46 192.168.39.46 localhost 127.0.0.1 minikube multinode-752665-m02]
	I0830 21:33:19.513011  975141 provision.go:172] copyRemoteCerts
	I0830 21:33:19.513069  975141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:33:19.513103  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:19.516007  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.516324  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:19.516350  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.516550  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:19.516752  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:19.516879  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:19.516977  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:33:19.609052  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 21:33:19.609138  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 21:33:19.632438  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 21:33:19.632533  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0830 21:33:19.655642  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 21:33:19.655722  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 21:33:19.679178  975141 provision.go:86] duration metric: configureAuth took 354.674144ms
	I0830 21:33:19.679209  975141 buildroot.go:189] setting minikube options for container-runtime
	I0830 21:33:19.679453  975141 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:33:19.679556  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:19.682025  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.682487  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:19.682528  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.682685  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:19.682884  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:19.683039  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:19.683143  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:19.683299  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:33:19.683719  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:33:19.683742  975141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:33:19.993010  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:33:19.993047  975141 main.go:141] libmachine: Checking connection to Docker...
	I0830 21:33:19.993058  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetURL
	I0830 21:33:19.994359  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | Using libvirt version 6000000
	I0830 21:33:19.996498  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.996870  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:19.996905  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.997040  975141 main.go:141] libmachine: Docker is up and running!
	I0830 21:33:19.997059  975141 main.go:141] libmachine: Reticulating splines...
	I0830 21:33:19.997067  975141 client.go:171] LocalClient.Create took 22.712107892s
	I0830 21:33:19.997096  975141 start.go:167] duration metric: libmachine.API.Create for "multinode-752665" took 22.712170841s
	I0830 21:33:19.997112  975141 start.go:300] post-start starting for "multinode-752665-m02" (driver="kvm2")
	I0830 21:33:19.997124  975141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:33:19.997144  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:33:19.997404  975141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:33:19.997445  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:19.999587  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:19.999929  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:19.999961  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.000068  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:20.000263  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:20.000466  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:20.000633  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:33:20.094381  975141 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:33:20.098614  975141 command_runner.go:130] > NAME=Buildroot
	I0830 21:33:20.098643  975141 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0830 21:33:20.098658  975141 command_runner.go:130] > ID=buildroot
	I0830 21:33:20.098668  975141 command_runner.go:130] > VERSION_ID=2021.02.12
	I0830 21:33:20.098676  975141 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0830 21:33:20.098719  975141 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 21:33:20.098734  975141 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 21:33:20.098806  975141 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 21:33:20.098877  975141 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 21:33:20.098889  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /etc/ssl/certs/9626212.pem
	I0830 21:33:20.098998  975141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 21:33:20.108367  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:33:20.131156  975141 start.go:303] post-start completed in 134.031149ms
	I0830 21:33:20.131214  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetConfigRaw
	I0830 21:33:20.131894  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetIP
	I0830 21:33:20.134582  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.134967  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:20.134992  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.135255  975141 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:33:20.135446  975141 start.go:128] duration metric: createHost completed in 22.868665447s
	I0830 21:33:20.135469  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:20.137418  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.137743  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:20.137777  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.137875  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:20.138058  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:20.138206  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:20.138390  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:20.138526  975141 main.go:141] libmachine: Using SSH client type: native
	I0830 21:33:20.138913  975141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:33:20.138924  975141 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 21:33:20.256953  975141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693431200.232753633
	
	I0830 21:33:20.256981  975141 fix.go:206] guest clock: 1693431200.232753633
	I0830 21:33:20.256988  975141 fix.go:219] Guest: 2023-08-30 21:33:20.232753633 +0000 UTC Remote: 2023-08-30 21:33:20.135457642 +0000 UTC m=+92.619539439 (delta=97.295991ms)
	I0830 21:33:20.257005  975141 fix.go:190] guest clock delta is within tolerance: 97.295991ms
	I0830 21:33:20.257010  975141 start.go:83] releasing machines lock for "multinode-752665-m02", held for 22.990310101s
	I0830 21:33:20.257031  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:33:20.257373  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetIP
	I0830 21:33:20.259964  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.260314  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:20.260342  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.263147  975141 out.go:177] * Found network options:
	I0830 21:33:20.264963  975141 out.go:177]   - NO_PROXY=192.168.39.20
	W0830 21:33:20.266310  975141 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 21:33:20.266360  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:33:20.266939  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:33:20.267167  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:33:20.267277  975141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:33:20.267323  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	W0830 21:33:20.267417  975141 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 21:33:20.267516  975141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:33:20.267549  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:33:20.269975  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.270359  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:20.270390  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.270408  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.270568  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:20.270736  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:20.270872  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:20.270907  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:20.270951  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:20.271044  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:33:20.271118  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:33:20.271278  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:33:20.271431  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:33:20.271570  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:33:20.374209  975141 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 21:33:20.508300  975141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 21:33:20.514601  975141 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0830 21:33:20.514964  975141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 21:33:20.515043  975141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:33:20.529716  975141 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0830 21:33:20.529854  975141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 21:33:20.529876  975141 start.go:466] detecting cgroup driver to use...
	I0830 21:33:20.529955  975141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:33:20.543331  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:33:20.555198  975141 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:33:20.555278  975141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:33:20.567306  975141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:33:20.578917  975141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:33:20.591688  975141 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0830 21:33:20.678337  975141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:33:20.692278  975141 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0830 21:33:20.793413  975141 docker.go:212] disabling docker service ...
	I0830 21:33:20.793479  975141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:33:20.806225  975141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:33:20.817248  975141 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0830 21:33:20.817525  975141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:33:20.830349  975141 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0830 21:33:20.922498  975141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:33:21.028427  975141 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0830 21:33:21.028462  975141 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0830 21:33:21.028527  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:33:21.040434  975141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:33:21.056980  975141 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0830 21:33:21.057018  975141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 21:33:21.057066  975141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:33:21.065983  975141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:33:21.066047  975141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:33:21.074839  975141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:33:21.083609  975141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:33:21.092392  975141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:33:21.101614  975141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:33:21.109239  975141 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:33:21.109530  975141 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:33:21.109593  975141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 21:33:21.122125  975141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:33:21.130542  975141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:33:21.228452  975141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 21:33:21.400874  975141 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:33:21.400947  975141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:33:21.410385  975141 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0830 21:33:21.410419  975141 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 21:33:21.410431  975141 command_runner.go:130] > Device: 16h/22d	Inode: 719         Links: 1
	I0830 21:33:21.410442  975141 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:33:21.410450  975141 command_runner.go:130] > Access: 2023-08-30 21:33:21.364884768 +0000
	I0830 21:33:21.410462  975141 command_runner.go:130] > Modify: 2023-08-30 21:33:21.364884768 +0000
	I0830 21:33:21.410473  975141 command_runner.go:130] > Change: 2023-08-30 21:33:21.364884768 +0000
	I0830 21:33:21.410479  975141 command_runner.go:130] >  Birth: -
	I0830 21:33:21.410507  975141 start.go:534] Will wait 60s for crictl version
	I0830 21:33:21.410569  975141 ssh_runner.go:195] Run: which crictl
	I0830 21:33:21.414090  975141 command_runner.go:130] > /usr/bin/crictl
	I0830 21:33:21.414314  975141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:33:21.443450  975141 command_runner.go:130] > Version:  0.1.0
	I0830 21:33:21.443479  975141 command_runner.go:130] > RuntimeName:  cri-o
	I0830 21:33:21.443486  975141 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0830 21:33:21.443499  975141 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0830 21:33:21.443519  975141 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 21:33:21.443596  975141 ssh_runner.go:195] Run: crio --version
	I0830 21:33:21.491052  975141 command_runner.go:130] > crio version 1.24.1
	I0830 21:33:21.491082  975141 command_runner.go:130] > Version:          1.24.1
	I0830 21:33:21.491092  975141 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:33:21.491098  975141 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:33:21.491107  975141 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:33:21.491113  975141 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:33:21.491119  975141 command_runner.go:130] > Compiler:         gc
	I0830 21:33:21.491126  975141 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:33:21.491134  975141 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:33:21.491157  975141 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:33:21.491168  975141 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:33:21.491174  975141 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:33:21.492518  975141 ssh_runner.go:195] Run: crio --version
	I0830 21:33:21.538961  975141 command_runner.go:130] > crio version 1.24.1
	I0830 21:33:21.538986  975141 command_runner.go:130] > Version:          1.24.1
	I0830 21:33:21.538997  975141 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:33:21.539004  975141 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:33:21.539020  975141 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:33:21.539028  975141 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:33:21.539036  975141 command_runner.go:130] > Compiler:         gc
	I0830 21:33:21.539044  975141 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:33:21.539052  975141 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:33:21.539069  975141 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:33:21.539079  975141 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:33:21.539089  975141 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:33:21.542425  975141 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 21:33:21.543796  975141 out.go:177]   - env NO_PROXY=192.168.39.20
	I0830 21:33:21.545065  975141 main.go:141] libmachine: (multinode-752665-m02) Calling .GetIP
	I0830 21:33:21.547474  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:21.547749  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:33:21.547796  975141 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:33:21.547958  975141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 21:33:21.551991  975141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:33:21.564179  975141 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665 for IP: 192.168.39.46
	I0830 21:33:21.564216  975141 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:33:21.564380  975141 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 21:33:21.564438  975141 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 21:33:21.564451  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 21:33:21.564465  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 21:33:21.564476  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 21:33:21.564490  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 21:33:21.564546  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 21:33:21.564574  975141 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 21:33:21.564584  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 21:33:21.564605  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 21:33:21.564628  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:33:21.564656  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 21:33:21.564699  975141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:33:21.564723  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:33:21.564735  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem -> /usr/share/ca-certificates/962621.pem
	I0830 21:33:21.564746  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /usr/share/ca-certificates/9626212.pem
	I0830 21:33:21.565167  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:33:21.587621  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:33:21.609677  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:33:21.631839  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 21:33:21.654101  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:33:21.676166  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 21:33:21.698377  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 21:33:21.720926  975141 ssh_runner.go:195] Run: openssl version
	I0830 21:33:21.726409  975141 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0830 21:33:21.726495  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 21:33:21.736153  975141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 21:33:21.740518  975141 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:33:21.740683  975141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:33:21.740734  975141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 21:33:21.746078  975141 command_runner.go:130] > 51391683
	I0830 21:33:21.746349  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 21:33:21.756474  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 21:33:21.766575  975141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 21:33:21.771195  975141 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:33:21.771269  975141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:33:21.771338  975141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 21:33:21.776514  975141 command_runner.go:130] > 3ec20f2e
	I0830 21:33:21.776916  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 21:33:21.786502  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:33:21.796116  975141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:33:21.800531  975141 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:33:21.800784  975141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:33:21.800831  975141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:33:21.806060  975141 command_runner.go:130] > b5213941
	I0830 21:33:21.806317  975141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:33:21.818269  975141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:33:21.822168  975141 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:33:21.822476  975141 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:33:21.822573  975141 ssh_runner.go:195] Run: crio config
	I0830 21:33:21.879222  975141 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0830 21:33:21.879249  975141 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0830 21:33:21.879259  975141 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0830 21:33:21.879262  975141 command_runner.go:130] > #
	I0830 21:33:21.879269  975141 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0830 21:33:21.879279  975141 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0830 21:33:21.879289  975141 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0830 21:33:21.879302  975141 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0830 21:33:21.879308  975141 command_runner.go:130] > # reload'.
	I0830 21:33:21.879319  975141 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0830 21:33:21.879327  975141 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0830 21:33:21.879336  975141 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0830 21:33:21.879342  975141 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0830 21:33:21.879348  975141 command_runner.go:130] > [crio]
	I0830 21:33:21.879353  975141 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0830 21:33:21.879359  975141 command_runner.go:130] > # containers images, in this directory.
	I0830 21:33:21.879367  975141 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0830 21:33:21.879378  975141 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0830 21:33:21.879388  975141 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0830 21:33:21.879402  975141 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0830 21:33:21.879416  975141 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0830 21:33:21.879430  975141 command_runner.go:130] > storage_driver = "overlay"
	I0830 21:33:21.879443  975141 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0830 21:33:21.879451  975141 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0830 21:33:21.879458  975141 command_runner.go:130] > storage_option = [
	I0830 21:33:21.879482  975141 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0830 21:33:21.879491  975141 command_runner.go:130] > ]
	I0830 21:33:21.879501  975141 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0830 21:33:21.879515  975141 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0830 21:33:21.879526  975141 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0830 21:33:21.879535  975141 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0830 21:33:21.879545  975141 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0830 21:33:21.879551  975141 command_runner.go:130] > # always happen on a node reboot
	I0830 21:33:21.879560  975141 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0830 21:33:21.879566  975141 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0830 21:33:21.879573  975141 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0830 21:33:21.879585  975141 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0830 21:33:21.879592  975141 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0830 21:33:21.879602  975141 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0830 21:33:21.879612  975141 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0830 21:33:21.879616  975141 command_runner.go:130] > # internal_wipe = true
	I0830 21:33:21.879624  975141 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0830 21:33:21.879630  975141 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0830 21:33:21.879640  975141 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0830 21:33:21.879656  975141 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0830 21:33:21.879668  975141 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0830 21:33:21.879674  975141 command_runner.go:130] > [crio.api]
	I0830 21:33:21.879686  975141 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0830 21:33:21.879697  975141 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0830 21:33:21.879706  975141 command_runner.go:130] > # IP address on which the stream server will listen.
	I0830 21:33:21.879735  975141 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0830 21:33:21.879751  975141 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0830 21:33:21.879757  975141 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0830 21:33:21.879764  975141 command_runner.go:130] > # stream_port = "0"
	I0830 21:33:21.879782  975141 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0830 21:33:21.879793  975141 command_runner.go:130] > # stream_enable_tls = false
	I0830 21:33:21.879804  975141 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0830 21:33:21.879814  975141 command_runner.go:130] > # stream_idle_timeout = ""
	I0830 21:33:21.879825  975141 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0830 21:33:21.879839  975141 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0830 21:33:21.879847  975141 command_runner.go:130] > # minutes.
	I0830 21:33:21.879852  975141 command_runner.go:130] > # stream_tls_cert = ""
	I0830 21:33:21.879858  975141 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0830 21:33:21.879869  975141 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0830 21:33:21.879878  975141 command_runner.go:130] > # stream_tls_key = ""
	I0830 21:33:21.879888  975141 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0830 21:33:21.879906  975141 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0830 21:33:21.879918  975141 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0830 21:33:21.879927  975141 command_runner.go:130] > # stream_tls_ca = ""
	I0830 21:33:21.879944  975141 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:33:21.879955  975141 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0830 21:33:21.879970  975141 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:33:21.879981  975141 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0830 21:33:21.880002  975141 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0830 21:33:21.880016  975141 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0830 21:33:21.880026  975141 command_runner.go:130] > [crio.runtime]
	I0830 21:33:21.880036  975141 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0830 21:33:21.880049  975141 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0830 21:33:21.880059  975141 command_runner.go:130] > # "nofile=1024:2048"
	I0830 21:33:21.880072  975141 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0830 21:33:21.880082  975141 command_runner.go:130] > # default_ulimits = [
	I0830 21:33:21.880088  975141 command_runner.go:130] > # ]
	I0830 21:33:21.880101  975141 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0830 21:33:21.880112  975141 command_runner.go:130] > # no_pivot = false
	I0830 21:33:21.880123  975141 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0830 21:33:21.880138  975141 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0830 21:33:21.880149  975141 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0830 21:33:21.880162  975141 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0830 21:33:21.880171  975141 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0830 21:33:21.880183  975141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:33:21.880216  975141 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0830 21:33:21.880233  975141 command_runner.go:130] > # Cgroup setting for conmon
	I0830 21:33:21.880245  975141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0830 21:33:21.880252  975141 command_runner.go:130] > conmon_cgroup = "pod"
	I0830 21:33:21.880267  975141 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0830 21:33:21.880279  975141 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0830 21:33:21.880294  975141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:33:21.880304  975141 command_runner.go:130] > conmon_env = [
	I0830 21:33:21.880316  975141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0830 21:33:21.880324  975141 command_runner.go:130] > ]
	I0830 21:33:21.880336  975141 command_runner.go:130] > # Additional environment variables to set for all the
	I0830 21:33:21.880347  975141 command_runner.go:130] > # containers. These are overridden if set in the
	I0830 21:33:21.880357  975141 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0830 21:33:21.880367  975141 command_runner.go:130] > # default_env = [
	I0830 21:33:21.880376  975141 command_runner.go:130] > # ]
	I0830 21:33:21.880390  975141 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0830 21:33:21.880400  975141 command_runner.go:130] > # selinux = false
	I0830 21:33:21.880414  975141 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0830 21:33:21.880428  975141 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0830 21:33:21.880440  975141 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0830 21:33:21.880447  975141 command_runner.go:130] > # seccomp_profile = ""
	I0830 21:33:21.880459  975141 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0830 21:33:21.880473  975141 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0830 21:33:21.880485  975141 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0830 21:33:21.880495  975141 command_runner.go:130] > # which might increase security.
	I0830 21:33:21.880507  975141 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0830 21:33:21.880520  975141 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0830 21:33:21.880534  975141 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0830 21:33:21.880547  975141 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0830 21:33:21.880562  975141 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0830 21:33:21.880575  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:33:21.880585  975141 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0830 21:33:21.880595  975141 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0830 21:33:21.880606  975141 command_runner.go:130] > # the cgroup blockio controller.
	I0830 21:33:21.880616  975141 command_runner.go:130] > # blockio_config_file = ""
	I0830 21:33:21.880630  975141 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0830 21:33:21.880640  975141 command_runner.go:130] > # irqbalance daemon.
	I0830 21:33:21.880649  975141 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0830 21:33:21.880667  975141 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0830 21:33:21.880675  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:33:21.880715  975141 command_runner.go:130] > # rdt_config_file = ""
	I0830 21:33:21.880728  975141 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0830 21:33:21.880734  975141 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0830 21:33:21.880743  975141 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0830 21:33:21.880751  975141 command_runner.go:130] > # separate_pull_cgroup = ""
	I0830 21:33:21.880762  975141 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0830 21:33:21.880774  975141 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0830 21:33:21.880784  975141 command_runner.go:130] > # will be added.
	I0830 21:33:21.880791  975141 command_runner.go:130] > # default_capabilities = [
	I0830 21:33:21.880801  975141 command_runner.go:130] > # 	"CHOWN",
	I0830 21:33:21.880811  975141 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0830 21:33:21.880820  975141 command_runner.go:130] > # 	"FSETID",
	I0830 21:33:21.880830  975141 command_runner.go:130] > # 	"FOWNER",
	I0830 21:33:21.880837  975141 command_runner.go:130] > # 	"SETGID",
	I0830 21:33:21.880846  975141 command_runner.go:130] > # 	"SETUID",
	I0830 21:33:21.880854  975141 command_runner.go:130] > # 	"SETPCAP",
	I0830 21:33:21.880866  975141 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0830 21:33:21.880875  975141 command_runner.go:130] > # 	"KILL",
	I0830 21:33:21.880881  975141 command_runner.go:130] > # ]
	I0830 21:33:21.880895  975141 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0830 21:33:21.880909  975141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:33:21.880919  975141 command_runner.go:130] > # default_sysctls = [
	I0830 21:33:21.880924  975141 command_runner.go:130] > # ]
	I0830 21:33:21.880935  975141 command_runner.go:130] > # List of devices on the host that a
	I0830 21:33:21.880948  975141 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0830 21:33:21.880955  975141 command_runner.go:130] > # allowed_devices = [
	I0830 21:33:21.880965  975141 command_runner.go:130] > # 	"/dev/fuse",
	I0830 21:33:21.880970  975141 command_runner.go:130] > # ]
	I0830 21:33:21.880980  975141 command_runner.go:130] > # List of additional devices. specified as
	I0830 21:33:21.880995  975141 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0830 21:33:21.881007  975141 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0830 21:33:21.881050  975141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:33:21.881061  975141 command_runner.go:130] > # additional_devices = [
	I0830 21:33:21.881066  975141 command_runner.go:130] > # ]
	I0830 21:33:21.881078  975141 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0830 21:33:21.881088  975141 command_runner.go:130] > # cdi_spec_dirs = [
	I0830 21:33:21.881115  975141 command_runner.go:130] > # 	"/etc/cdi",
	I0830 21:33:21.881126  975141 command_runner.go:130] > # 	"/var/run/cdi",
	I0830 21:33:21.881132  975141 command_runner.go:130] > # ]
	I0830 21:33:21.881146  975141 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0830 21:33:21.881159  975141 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0830 21:33:21.881168  975141 command_runner.go:130] > # Defaults to false.
	I0830 21:33:21.881176  975141 command_runner.go:130] > # device_ownership_from_security_context = false
	I0830 21:33:21.881187  975141 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0830 21:33:21.881199  975141 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0830 21:33:21.881208  975141 command_runner.go:130] > # hooks_dir = [
	I0830 21:33:21.881216  975141 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0830 21:33:21.881225  975141 command_runner.go:130] > # ]
	I0830 21:33:21.881235  975141 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0830 21:33:21.881248  975141 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0830 21:33:21.881258  975141 command_runner.go:130] > # its default mounts from the following two files:
	I0830 21:33:21.881264  975141 command_runner.go:130] > #
	I0830 21:33:21.881276  975141 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0830 21:33:21.881289  975141 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0830 21:33:21.881301  975141 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0830 21:33:21.881309  975141 command_runner.go:130] > #
	I0830 21:33:21.881320  975141 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0830 21:33:21.881332  975141 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0830 21:33:21.881346  975141 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0830 21:33:21.881358  975141 command_runner.go:130] > #      only add mounts it finds in this file.
	I0830 21:33:21.881367  975141 command_runner.go:130] > #
	I0830 21:33:21.881374  975141 command_runner.go:130] > # default_mounts_file = ""
	I0830 21:33:21.881386  975141 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0830 21:33:21.881400  975141 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0830 21:33:21.881409  975141 command_runner.go:130] > pids_limit = 1024
	I0830 21:33:21.881420  975141 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0830 21:33:21.881434  975141 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0830 21:33:21.881447  975141 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0830 21:33:21.881460  975141 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0830 21:33:21.881469  975141 command_runner.go:130] > # log_size_max = -1
	I0830 21:33:21.881477  975141 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0830 21:33:21.881483  975141 command_runner.go:130] > # log_to_journald = false
	I0830 21:33:21.881489  975141 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0830 21:33:21.881496  975141 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0830 21:33:21.881502  975141 command_runner.go:130] > # Path to directory for container attach sockets.
	I0830 21:33:21.881509  975141 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0830 21:33:21.881514  975141 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0830 21:33:21.881520  975141 command_runner.go:130] > # bind_mount_prefix = ""
	I0830 21:33:21.881528  975141 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0830 21:33:21.881537  975141 command_runner.go:130] > # read_only = false
	I0830 21:33:21.881548  975141 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0830 21:33:21.881561  975141 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0830 21:33:21.881571  975141 command_runner.go:130] > # live configuration reload.
	I0830 21:33:21.881581  975141 command_runner.go:130] > # log_level = "info"
	I0830 21:33:21.881591  975141 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0830 21:33:21.881603  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:33:21.881612  975141 command_runner.go:130] > # log_filter = ""
	I0830 21:33:21.881619  975141 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0830 21:33:21.881636  975141 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0830 21:33:21.881647  975141 command_runner.go:130] > # separated by comma.
	I0830 21:33:21.881686  975141 command_runner.go:130] > # uid_mappings = ""
	I0830 21:33:21.881700  975141 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0830 21:33:21.881709  975141 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0830 21:33:21.881714  975141 command_runner.go:130] > # separated by comma.
	I0830 21:33:21.881725  975141 command_runner.go:130] > # gid_mappings = ""
	I0830 21:33:21.881736  975141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0830 21:33:21.881750  975141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:33:21.881764  975141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:33:21.881775  975141 command_runner.go:130] > # minimum_mappable_uid = -1
	I0830 21:33:21.881787  975141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0830 21:33:21.881803  975141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:33:21.881815  975141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:33:21.881825  975141 command_runner.go:130] > # minimum_mappable_gid = -1
	I0830 21:33:21.881837  975141 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0830 21:33:21.881849  975141 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0830 21:33:21.881862  975141 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0830 21:33:21.881872  975141 command_runner.go:130] > # ctr_stop_timeout = 30
	I0830 21:33:21.881881  975141 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0830 21:33:21.881892  975141 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0830 21:33:21.881901  975141 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0830 21:33:21.881909  975141 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0830 21:33:21.881940  975141 command_runner.go:130] > drop_infra_ctr = false
	I0830 21:33:21.881954  975141 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0830 21:33:21.881965  975141 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0830 21:33:21.881975  975141 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0830 21:33:21.881984  975141 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0830 21:33:21.881994  975141 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0830 21:33:21.882003  975141 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0830 21:33:21.882009  975141 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0830 21:33:21.882023  975141 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0830 21:33:21.882032  975141 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0830 21:33:21.882041  975141 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0830 21:33:21.882054  975141 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0830 21:33:21.882066  975141 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0830 21:33:21.882084  975141 command_runner.go:130] > # default_runtime = "runc"
	I0830 21:33:21.882096  975141 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0830 21:33:21.882107  975141 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0830 21:33:21.882125  975141 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0830 21:33:21.882137  975141 command_runner.go:130] > # creation as a file is not desired either.
	I0830 21:33:21.882155  975141 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0830 21:33:21.882163  975141 command_runner.go:130] > # the hostname is being managed dynamically.
	I0830 21:33:21.882173  975141 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0830 21:33:21.882178  975141 command_runner.go:130] > # ]
	I0830 21:33:21.882187  975141 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0830 21:33:21.882202  975141 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0830 21:33:21.882216  975141 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0830 21:33:21.882230  975141 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0830 21:33:21.882238  975141 command_runner.go:130] > #
	I0830 21:33:21.882247  975141 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0830 21:33:21.882258  975141 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0830 21:33:21.882269  975141 command_runner.go:130] > #  runtime_type = "oci"
	I0830 21:33:21.882278  975141 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0830 21:33:21.882289  975141 command_runner.go:130] > #  privileged_without_host_devices = false
	I0830 21:33:21.882300  975141 command_runner.go:130] > #  allowed_annotations = []
	I0830 21:33:21.882306  975141 command_runner.go:130] > # Where:
	I0830 21:33:21.882320  975141 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0830 21:33:21.882335  975141 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0830 21:33:21.882350  975141 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0830 21:33:21.882364  975141 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0830 21:33:21.882374  975141 command_runner.go:130] > #   in $PATH.
	I0830 21:33:21.882384  975141 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0830 21:33:21.882395  975141 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0830 21:33:21.882410  975141 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0830 21:33:21.882420  975141 command_runner.go:130] > #   state.
	I0830 21:33:21.882431  975141 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0830 21:33:21.882445  975141 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0830 21:33:21.882459  975141 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0830 21:33:21.882471  975141 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0830 21:33:21.882485  975141 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0830 21:33:21.882496  975141 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0830 21:33:21.882534  975141 command_runner.go:130] > #   The currently recognized values are:
	I0830 21:33:21.882549  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0830 21:33:21.882561  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0830 21:33:21.882576  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0830 21:33:21.882590  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0830 21:33:21.882606  975141 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0830 21:33:21.882619  975141 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0830 21:33:21.882633  975141 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0830 21:33:21.882648  975141 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0830 21:33:21.882666  975141 command_runner.go:130] > #   should be moved to the container's cgroup
	I0830 21:33:21.882674  975141 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0830 21:33:21.882685  975141 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0830 21:33:21.882692  975141 command_runner.go:130] > runtime_type = "oci"
	I0830 21:33:21.882701  975141 command_runner.go:130] > runtime_root = "/run/runc"
	I0830 21:33:21.882707  975141 command_runner.go:130] > runtime_config_path = ""
	I0830 21:33:21.882715  975141 command_runner.go:130] > monitor_path = ""
	I0830 21:33:21.882721  975141 command_runner.go:130] > monitor_cgroup = ""
	I0830 21:33:21.882731  975141 command_runner.go:130] > monitor_exec_cgroup = ""
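	Note: any additional handler (crun, kata, ...) uses the same table shape as the runc entry above. A minimal sketch of registering crun through a CRI-O drop-in file, assuming crun is installed at /usr/bin/crun and that the node reads /etc/crio/crio.conf.d (neither is shown in this log):
	cat <<-'EOF' | sudo tee /etc/crio/crio.conf.d/10-crun.conf
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio
	After the restart, pods can select the handler via a RuntimeClass whose handler field is "crun".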
	I0830 21:33:21.882744  975141 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0830 21:33:21.882755  975141 command_runner.go:130] > # running containers
	I0830 21:33:21.882766  975141 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0830 21:33:21.882776  975141 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0830 21:33:21.882812  975141 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0830 21:33:21.882824  975141 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0830 21:33:21.882835  975141 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0830 21:33:21.882847  975141 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0830 21:33:21.882858  975141 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0830 21:33:21.882865  975141 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0830 21:33:21.882875  975141 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0830 21:33:21.882882  975141 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0830 21:33:21.882895  975141 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0830 21:33:21.882907  975141 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0830 21:33:21.882920  975141 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0830 21:33:21.882936  975141 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0830 21:33:21.882952  975141 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0830 21:33:21.882962  975141 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0830 21:33:21.882982  975141 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0830 21:33:21.882998  975141 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0830 21:33:21.883010  975141 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0830 21:33:21.883025  975141 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0830 21:33:21.883034  975141 command_runner.go:130] > # Example:
	I0830 21:33:21.883041  975141 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0830 21:33:21.883053  975141 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0830 21:33:21.883065  975141 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0830 21:33:21.883077  975141 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0830 21:33:21.883086  975141 command_runner.go:130] > # cpuset = 0
	I0830 21:33:21.883093  975141 command_runner.go:130] > # cpushares = "0-1"
	I0830 21:33:21.883101  975141 command_runner.go:130] > # Where:
	I0830 21:33:21.883108  975141 command_runner.go:130] > # The workload name is workload-type.
	I0830 21:33:21.883118  975141 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0830 21:33:21.883123  975141 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0830 21:33:21.883131  975141 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0830 21:33:21.883139  975141 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0830 21:33:21.883147  975141 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0830 21:33:21.883151  975141 command_runner.go:130] > # 
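	Note: a minimal sketch of a pod opting into the example workload documented above, assuming the commented-out [crio.runtime.workloads.workload-type] table were actually enabled in crio.conf; only the activation annotation is required, and its value is ignored:
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	spec:
	  containers:
	  - name: demo
	    image: busybox
	    command: ["sleep", "3600"]
	EOF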
	I0830 21:33:21.883159  975141 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0830 21:33:21.883163  975141 command_runner.go:130] > #
	I0830 21:33:21.883169  975141 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0830 21:33:21.883177  975141 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0830 21:33:21.883183  975141 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0830 21:33:21.883189  975141 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0830 21:33:21.883195  975141 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0830 21:33:21.883199  975141 command_runner.go:130] > [crio.image]
	I0830 21:33:21.883207  975141 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0830 21:33:21.883211  975141 command_runner.go:130] > # default_transport = "docker://"
	I0830 21:33:21.883217  975141 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0830 21:33:21.883226  975141 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:33:21.883230  975141 command_runner.go:130] > # global_auth_file = ""
	I0830 21:33:21.883237  975141 command_runner.go:130] > # The image used to instantiate infra containers.
	I0830 21:33:21.883242  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:33:21.883273  975141 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0830 21:33:21.883282  975141 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0830 21:33:21.883289  975141 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:33:21.883297  975141 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:33:21.883301  975141 command_runner.go:130] > # pause_image_auth_file = ""
	I0830 21:33:21.883307  975141 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0830 21:33:21.883313  975141 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0830 21:33:21.883319  975141 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0830 21:33:21.883329  975141 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0830 21:33:21.883336  975141 command_runner.go:130] > # pause_command = "/pause"
	I0830 21:33:21.883342  975141 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0830 21:33:21.883350  975141 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0830 21:33:21.883357  975141 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0830 21:33:21.883365  975141 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0830 21:33:21.883370  975141 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0830 21:33:21.883376  975141 command_runner.go:130] > # signature_policy = ""
	I0830 21:33:21.883382  975141 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0830 21:33:21.883390  975141 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0830 21:33:21.883394  975141 command_runner.go:130] > # changing them here.
	I0830 21:33:21.883398  975141 command_runner.go:130] > # insecure_registries = [
	I0830 21:33:21.883401  975141 command_runner.go:130] > # ]
	I0830 21:33:21.883408  975141 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0830 21:33:21.883413  975141 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0830 21:33:21.883417  975141 command_runner.go:130] > # image_volumes = "mkdir"
	I0830 21:33:21.883424  975141 command_runner.go:130] > # Temporary directory to use for storing big files
	I0830 21:33:21.883429  975141 command_runner.go:130] > # big_files_temporary_dir = ""
	I0830 21:33:21.883437  975141 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0830 21:33:21.883441  975141 command_runner.go:130] > # CNI plugins.
	I0830 21:33:21.883445  975141 command_runner.go:130] > [crio.network]
	I0830 21:33:21.883451  975141 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0830 21:33:21.883458  975141 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0830 21:33:21.883462  975141 command_runner.go:130] > # cni_default_network = ""
	I0830 21:33:21.883468  975141 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0830 21:33:21.883474  975141 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0830 21:33:21.883479  975141 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0830 21:33:21.883487  975141 command_runner.go:130] > # plugin_dirs = [
	I0830 21:33:21.883493  975141 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0830 21:33:21.883502  975141 command_runner.go:130] > # ]
	I0830 21:33:21.883512  975141 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0830 21:33:21.883521  975141 command_runner.go:130] > [crio.metrics]
	I0830 21:33:21.883529  975141 command_runner.go:130] > # Globally enable or disable metrics support.
	I0830 21:33:21.883536  975141 command_runner.go:130] > enable_metrics = true
	I0830 21:33:21.883548  975141 command_runner.go:130] > # Specify enabled metrics collectors.
	I0830 21:33:21.883559  975141 command_runner.go:130] > # Per default all metrics are enabled.
	I0830 21:33:21.883569  975141 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0830 21:33:21.883577  975141 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0830 21:33:21.883582  975141 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0830 21:33:21.883589  975141 command_runner.go:130] > # metrics_collectors = [
	I0830 21:33:21.883593  975141 command_runner.go:130] > # 	"operations",
	I0830 21:33:21.883598  975141 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0830 21:33:21.883605  975141 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0830 21:33:21.883609  975141 command_runner.go:130] > # 	"operations_errors",
	I0830 21:33:21.883618  975141 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0830 21:33:21.883628  975141 command_runner.go:130] > # 	"image_pulls_by_name",
	I0830 21:33:21.883639  975141 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0830 21:33:21.883655  975141 command_runner.go:130] > # 	"image_pulls_failures",
	I0830 21:33:21.883663  975141 command_runner.go:130] > # 	"image_pulls_successes",
	I0830 21:33:21.883668  975141 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0830 21:33:21.883674  975141 command_runner.go:130] > # 	"image_layer_reuse",
	I0830 21:33:21.883678  975141 command_runner.go:130] > # 	"containers_oom_total",
	I0830 21:33:21.883685  975141 command_runner.go:130] > # 	"containers_oom",
	I0830 21:33:21.883689  975141 command_runner.go:130] > # 	"processes_defunct",
	I0830 21:33:21.883695  975141 command_runner.go:130] > # 	"operations_total",
	I0830 21:33:21.883700  975141 command_runner.go:130] > # 	"operations_latency_seconds",
	I0830 21:33:21.883712  975141 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0830 21:33:21.883723  975141 command_runner.go:130] > # 	"operations_errors_total",
	I0830 21:33:21.883730  975141 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0830 21:33:21.883742  975141 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0830 21:33:21.883753  975141 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0830 21:33:21.883760  975141 command_runner.go:130] > # 	"image_pulls_success_total",
	I0830 21:33:21.883784  975141 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0830 21:33:21.883793  975141 command_runner.go:130] > # 	"containers_oom_count_total",
	I0830 21:33:21.883802  975141 command_runner.go:130] > # ]
	I0830 21:33:21.883811  975141 command_runner.go:130] > # The port on which the metrics server will listen.
	I0830 21:33:21.883822  975141 command_runner.go:130] > # metrics_port = 9090
	I0830 21:33:21.883834  975141 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0830 21:33:21.883841  975141 command_runner.go:130] > # metrics_socket = ""
	I0830 21:33:21.883846  975141 command_runner.go:130] > # The certificate for the secure metrics server.
	I0830 21:33:21.883854  975141 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0830 21:33:21.883865  975141 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0830 21:33:21.883876  975141 command_runner.go:130] > # certificate on any modification event.
	I0830 21:33:21.883886  975141 command_runner.go:130] > # metrics_cert = ""
	I0830 21:33:21.883901  975141 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0830 21:33:21.883913  975141 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0830 21:33:21.883922  975141 command_runner.go:130] > # metrics_key = ""
	I0830 21:33:21.883935  975141 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0830 21:33:21.883941  975141 command_runner.go:130] > [crio.tracing]
	I0830 21:33:21.883951  975141 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0830 21:33:21.883961  975141 command_runner.go:130] > # enable_tracing = false
	I0830 21:33:21.883971  975141 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0830 21:33:21.883982  975141 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0830 21:33:21.883991  975141 command_runner.go:130] > # Number of samples to collect per million spans.
	I0830 21:33:21.884002  975141 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0830 21:33:21.884015  975141 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0830 21:33:21.884023  975141 command_runner.go:130] > [crio.stats]
	I0830 21:33:21.884053  975141 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0830 21:33:21.884066  975141 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0830 21:33:21.884077  975141 command_runner.go:130] > # stats_collection_period = 0
	I0830 21:33:21.884510  975141 command_runner.go:130] ! time="2023-08-30 21:33:21.856948552Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0830 21:33:21.884537  975141 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
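	Note: the configuration echoed above lives on the node itself; assuming the stock /etc/crio/crio.conf path used by the minikube ISO, it can be reviewed directly with:
	minikube -p multinode-752665 ssh --node m02 -- sudo cat /etc/crio/crio.conf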
	I0830 21:33:21.884691  975141 cni.go:84] Creating CNI manager for ""
	I0830 21:33:21.884710  975141 cni.go:136] 2 nodes found, recommending kindnet
	I0830 21:33:21.884724  975141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:33:21.884751  975141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.46 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-752665 NodeName:multinode-752665-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:33:21.884883  975141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-752665-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.46
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:33:21.884938  975141 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-752665-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 21:33:21.884993  975141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 21:33:21.894881  975141 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	I0830 21:33:21.894921  975141 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	
	Initiating transfer...
	I0830 21:33:21.894965  975141 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.1
	I0830 21:33:21.903697  975141 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.28.1/kubeadm
	I0830 21:33:21.903722  975141 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl.sha256
	I0830 21:33:21.903743  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.28.1/kubectl -> /var/lib/minikube/binaries/v1.28.1/kubectl
	I0830 21:33:21.903744  975141 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.28.1/kubelet
	I0830 21:33:21.903832  975141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl
	I0830 21:33:21.907710  975141 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0830 21:33:21.907819  975141 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0830 21:33:21.907850  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.28.1/kubectl --> /var/lib/minikube/binaries/v1.28.1/kubectl (49864704 bytes)
	I0830 21:33:25.628793  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.28.1/kubeadm -> /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0830 21:33:25.628883  975141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0830 21:33:25.634003  975141 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0830 21:33:25.634051  975141 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0830 21:33:25.634077  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.28.1/kubeadm --> /var/lib/minikube/binaries/v1.28.1/kubeadm (50749440 bytes)
	I0830 21:33:29.050095  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:33:29.064604  975141 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.28.1/kubelet -> /var/lib/minikube/binaries/v1.28.1/kubelet
	I0830 21:33:29.064739  975141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet
	I0830 21:33:29.069192  975141 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0830 21:33:29.069245  975141 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0830 21:33:29.069275  975141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.28.1/kubelet --> /var/lib/minikube/binaries/v1.28.1/kubelet (110764032 bytes)
	I0830 21:33:29.598743  975141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0830 21:33:29.607206  975141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0830 21:33:29.624389  975141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
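	Note: once the kubelet.service unit and its 10-kubeadm.conf drop-in have been written as above, the rendered unit can be inspected on the new machine (profile name and --node flag taken from this run):
	minikube -p multinode-752665 ssh --node m02 -- sudo systemctl cat kubelet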
	I0830 21:33:29.641119  975141 ssh_runner.go:195] Run: grep 192.168.39.20	control-plane.minikube.internal$ /etc/hosts
	I0830 21:33:29.644863  975141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
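	Note: the bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts on the new machine and appends the control-plane IP; the result can be spot-checked with:
	minikube -p multinode-752665 ssh --node m02 -- grep control-plane.minikube.internal /etc/hosts
	This should print the 192.168.39.20 mapping written above.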
	I0830 21:33:29.659018  975141 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:33:29.659274  975141 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:33:29.659435  975141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:33:29.659485  975141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:33:29.674142  975141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0830 21:33:29.674590  975141 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:33:29.675083  975141 main.go:141] libmachine: Using API Version  1
	I0830 21:33:29.675113  975141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:33:29.675485  975141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:33:29.675710  975141 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:33:29.675875  975141 start.go:301] JoinCluster: &{Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:33:29.676009  975141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0830 21:33:29.676034  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:33:29.678741  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:33:29.679124  975141 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:33:29.679149  975141 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:33:29.679346  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:33:29.679537  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:33:29.679683  975141 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:33:29.679832  975141 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:33:29.849575  975141 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pvj7ws.2ste88k7pppcvd32 --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 21:33:29.849669  975141 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 21:33:29.849711  975141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pvj7ws.2ste88k7pppcvd32 --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-752665-m02"
	I0830 21:33:29.896895  975141 command_runner.go:130] ! W0830 21:33:29.888193     823 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0830 21:33:30.026915  975141 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 21:33:32.240815  975141 command_runner.go:130] > [preflight] Running pre-flight checks
	I0830 21:33:32.240850  975141 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0830 21:33:32.240865  975141 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0830 21:33:32.240877  975141 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:33:32.240889  975141 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:33:32.240896  975141 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 21:33:32.240906  975141 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0830 21:33:32.240914  975141 command_runner.go:130] > This node has joined the cluster:
	I0830 21:33:32.240924  975141 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0830 21:33:32.240934  975141 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0830 21:33:32.240948  975141 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0830 21:33:32.240975  975141 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pvj7ws.2ste88k7pppcvd32 --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-752665-m02": (2.391245433s)
	I0830 21:33:32.241006  975141 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0830 21:33:32.499633  975141 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0830 21:33:32.499876  975141 start.go:303] JoinCluster complete in 2.823999455s
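	Note: with the join complete, the new worker should be visible from the control plane; a quick verification, assuming the kubeconfig context is named after the profile (minikube's default):
	kubectl --context multinode-752665 get nodes -o wide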
	I0830 21:33:32.499901  975141 cni.go:84] Creating CNI manager for ""
	I0830 21:33:32.499909  975141 cni.go:136] 2 nodes found, recommending kindnet
	I0830 21:33:32.499999  975141 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 21:33:32.505310  975141 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 21:33:32.505331  975141 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0830 21:33:32.505348  975141 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0830 21:33:32.505357  975141 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:33:32.505366  975141 command_runner.go:130] > Access: 2023-08-30 21:32:00.901149145 +0000
	I0830 21:33:32.505376  975141 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0830 21:33:32.505385  975141 command_runner.go:130] > Change: 2023-08-30 21:31:59.074149145 +0000
	I0830 21:33:32.505395  975141 command_runner.go:130] >  Birth: -
	I0830 21:33:32.505690  975141 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 21:33:32.505708  975141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 21:33:32.524431  975141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 21:33:32.886551  975141 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0830 21:33:32.886596  975141 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0830 21:33:32.886605  975141 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0830 21:33:32.886613  975141 command_runner.go:130] > daemonset.apps/kindnet configured
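	Note: the kindnet daemonset applied above is expected to roll a pod out onto the newly joined node; its progress can be followed with the same assumed context name:
	kubectl --context multinode-752665 -n kube-system rollout status daemonset/kindnet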
	I0830 21:33:32.886969  975141 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:33:32.887182  975141 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:33:32.887556  975141 round_trippers.go:463] GET https://192.168.39.20:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 21:33:32.887571  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:32.887580  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:32.887586  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:32.889637  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:32.889657  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:32.889664  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:32.889669  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:32.889675  975141 round_trippers.go:580]     Content-Length: 291
	I0830 21:33:32.889680  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:32 GMT
	I0830 21:33:32.889686  975141 round_trippers.go:580]     Audit-Id: 89eccc31-a21d-42ef-a165-7b503798af83
	I0830 21:33:32.889691  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:32.889699  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:32.889726  975141 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4cda7228-5995-4a40-902e-7c8e87f8c72e","resourceVersion":"406","creationTimestamp":"2023-08-30T21:32:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0830 21:33:32.889815  975141 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-752665" context rescaled to 1 replicas
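	Note: the rescale above goes through the autoscaling/v1 Scale subresource of the coredns deployment (see the GET response body above); the roughly equivalent imperative command would be:
	kubectl --context multinode-752665 -n kube-system scale deployment/coredns --replicas=1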
	I0830 21:33:32.889849  975141 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 21:33:32.891801  975141 out.go:177] * Verifying Kubernetes components...
	I0830 21:33:32.893267  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:33:32.906293  975141 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:33:32.906542  975141 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:33:32.906792  975141 node_ready.go:35] waiting up to 6m0s for node "multinode-752665-m02" to be "Ready" ...
	I0830 21:33:32.906852  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:32.906859  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:32.906867  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:32.906872  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:32.909205  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:32.909223  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:32.909235  975141 round_trippers.go:580]     Audit-Id: 446813cb-ed8a-4bbb-a7e4-0c4cffdf6532
	I0830 21:33:32.909245  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:32.909251  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:32.909257  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:32.909264  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:32.909275  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:32.909285  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:32 GMT
	I0830 21:33:32.909404  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:32.909773  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:32.909788  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:32.909799  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:32.909817  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:32.911645  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:33:32.911670  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:32.911681  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:32 GMT
	I0830 21:33:32.911690  975141 round_trippers.go:580]     Audit-Id: 302c75d8-0168-4bc7-97f0-2b888b07a500
	I0830 21:33:32.911699  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:32.911706  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:32.911711  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:32.911720  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:32.911725  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:32.911809  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:33.412547  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:33.412576  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:33.412595  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:33.412604  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:33.415372  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:33.415402  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:33.415414  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:33.415423  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:33.415432  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:33.415441  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:33.415450  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:33 GMT
	I0830 21:33:33.415457  975141 round_trippers.go:580]     Audit-Id: bdc5610b-1b2e-4d22-a25e-c5bdc766a501
	I0830 21:33:33.415470  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:33.415594  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:33.913111  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:33.913136  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:33.913149  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:33.913157  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:33.916633  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:33:33.916654  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:33.916661  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:33.916667  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:33 GMT
	I0830 21:33:33.916676  975141 round_trippers.go:580]     Audit-Id: e8a8ae6b-33a2-43a4-9e2f-db42178a1407
	I0830 21:33:33.916681  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:33.916687  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:33.916693  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:33.916701  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:33.916769  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:34.413034  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:34.413058  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:34.413067  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:34.413074  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:34.415955  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:34.415983  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:34.416000  975141 round_trippers.go:580]     Audit-Id: 9b290898-391d-4190-b21d-d4d26ce8965c
	I0830 21:33:34.416012  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:34.416020  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:34.416032  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:34.416044  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:34.416056  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:34.416068  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:34 GMT
	I0830 21:33:34.416163  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:34.913294  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:34.913322  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:34.913334  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:34.913343  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:34.918063  975141 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:33:34.918098  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:34.918110  975141 round_trippers.go:580]     Audit-Id: 82e15502-2ce7-445e-9b9b-a062c7ded60f
	I0830 21:33:34.918120  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:34.918129  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:34.918137  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:34.918154  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:34.918164  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:34.918169  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:34 GMT
	I0830 21:33:34.918266  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:34.918605  975141 node_ready.go:58] node "multinode-752665-m02" has status "Ready":"False"
	I0830 21:33:35.412460  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:35.412491  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:35.412504  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:35.412516  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:35.416099  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:33:35.416124  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:35.416134  975141 round_trippers.go:580]     Audit-Id: bdb5c3fd-cf12-4235-8c10-62b33f8ee668
	I0830 21:33:35.416143  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:35.416151  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:35.416160  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:35.416169  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:35.416180  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:35.416186  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:35 GMT
	I0830 21:33:35.416340  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:35.913003  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:35.913027  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:35.913035  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:35.913041  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:35.916914  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:33:35.916943  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:35.916954  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:35.916964  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:35.916973  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:35.916988  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:35 GMT
	I0830 21:33:35.916997  975141 round_trippers.go:580]     Audit-Id: 23a4c9c6-a16a-4a34-98e9-4b90f8688651
	I0830 21:33:35.917002  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:35.917008  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:35.917104  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:36.412398  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:36.412420  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:36.412430  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:36.412436  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:36.415402  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:36.415429  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:36.415439  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:36.415446  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:36.415454  975141 round_trippers.go:580]     Content-Length: 3530
	I0830 21:33:36.415463  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:36 GMT
	I0830 21:33:36.415482  975141 round_trippers.go:580]     Audit-Id: 53dd8880-a6bd-47d1-82d6-d49f4b68f6bb
	I0830 21:33:36.415491  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:36.415500  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:36.415546  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"455","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0830 21:33:36.913310  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:36.913341  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:36.913353  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:36.913362  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:36.917465  975141 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:33:36.917491  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:36.917502  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:36.917510  975141 round_trippers.go:580]     Content-Length: 3639
	I0830 21:33:36.917518  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:36 GMT
	I0830 21:33:36.917527  975141 round_trippers.go:580]     Audit-Id: 564f427b-df7c-4d94-8576-129991b5acae
	I0830 21:33:36.917537  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:36.917549  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:36.917561  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:36.918015  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"475","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0830 21:33:37.412856  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:37.412890  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:37.412902  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:37.412912  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:37.416302  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:33:37.416333  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:37.416346  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:37.416357  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:37.416367  975141 round_trippers.go:580]     Content-Length: 3639
	I0830 21:33:37.416380  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:37 GMT
	I0830 21:33:37.416395  975141 round_trippers.go:580]     Audit-Id: fc4719ff-59e7-45db-b77c-981af7cac4c5
	I0830 21:33:37.416405  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:37.416418  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:37.416533  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"475","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0830 21:33:37.416857  975141 node_ready.go:58] node "multinode-752665-m02" has status "Ready":"False"
	I0830 21:33:37.913030  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:37.913054  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:37.913062  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:37.913068  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:37.915868  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:37.915891  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:37.915899  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:37 GMT
	I0830 21:33:37.915905  975141 round_trippers.go:580]     Audit-Id: 083a194a-1ea2-408b-aa2b-a0da738ac6e0
	I0830 21:33:37.915912  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:37.915923  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:37.915931  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:37.915937  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:37.915944  975141 round_trippers.go:580]     Content-Length: 3639
	I0830 21:33:37.916043  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"475","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0830 21:33:38.412216  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:38.412238  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:38.412247  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:38.412253  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:38.415042  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:38.415061  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:38.415068  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:38.415074  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:38.415079  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:38.415090  975141 round_trippers.go:580]     Content-Length: 3639
	I0830 21:33:38.415096  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:38 GMT
	I0830 21:33:38.415101  975141 round_trippers.go:580]     Audit-Id: fbb521e6-7f95-4b22-a360-1bf38c2b5cb3
	I0830 21:33:38.415109  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:38.415178  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"475","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0830 21:33:38.912782  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:38.912813  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:38.912824  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:38.912832  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:38.918258  975141 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0830 21:33:38.918289  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:38.918301  975141 round_trippers.go:580]     Audit-Id: a11ba457-98bb-4ea0-971b-a945732acb24
	I0830 21:33:38.918316  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:38.918325  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:38.918338  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:38.918347  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:38.918364  975141 round_trippers.go:580]     Content-Length: 3639
	I0830 21:33:38.918373  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:38 GMT
	I0830 21:33:38.918586  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"475","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0830 21:33:39.413007  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:39.413030  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:39.413040  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:39.413046  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:39.416285  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:33:39.416307  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:39.416316  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:39.416324  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:39.416333  975141 round_trippers.go:580]     Content-Length: 3639
	I0830 21:33:39.416344  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:39 GMT
	I0830 21:33:39.416357  975141 round_trippers.go:580]     Audit-Id: 31cf3761-3e40-4706-a3c3-bc8eb2fae6b4
	I0830 21:33:39.416370  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:39.416383  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:39.416451  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"475","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0830 21:33:39.912648  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:39.912698  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:39.912710  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:39.912717  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:39.915242  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:39.915266  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:39.915276  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:39 GMT
	I0830 21:33:39.915285  975141 round_trippers.go:580]     Audit-Id: 1a0526c8-86e8-4538-9ac7-5f45d14bc721
	I0830 21:33:39.915296  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:39.915306  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:39.915320  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:39.915330  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:39.915350  975141 round_trippers.go:580]     Content-Length: 3639
	I0830 21:33:39.915430  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"475","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0830 21:33:39.915692  975141 node_ready.go:58] node "multinode-752665-m02" has status "Ready":"False"
	I0830 21:33:40.413005  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:40.413034  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.413043  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.413051  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.415603  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:40.415625  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.415634  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.415640  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.415645  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.415651  975141 round_trippers.go:580]     Content-Length: 3725
	I0830 21:33:40.415656  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.415667  975141 round_trippers.go:580]     Audit-Id: 2b4201fe-b3ea-410a-8e19-e9815b6829ea
	I0830 21:33:40.415673  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.415736  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"487","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I0830 21:33:40.415993  975141 node_ready.go:49] node "multinode-752665-m02" has status "Ready":"True"
	I0830 21:33:40.416003  975141 node_ready.go:38] duration metric: took 7.50919888s waiting for node "multinode-752665-m02" to be "Ready" ...
	I0830 21:33:40.416012  975141 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:33:40.416061  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:33:40.416065  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.416072  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.416078  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.419435  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:33:40.419454  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.419461  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.419466  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.419475  975141 round_trippers.go:580]     Audit-Id: 2042d5b0-fec5-400d-869b-cbd575b0f98c
	I0830 21:33:40.419484  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.419491  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.419499  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.421108  975141 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"488"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"402","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67326 chars]
	I0830 21:33:40.423146  975141 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.423212  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:33:40.423220  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.423228  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.423234  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.425349  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:40.425367  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.425377  975141 round_trippers.go:580]     Audit-Id: 2618d8f1-de40-4131-95c6-fe0cc0dc9dc3
	I0830 21:33:40.425385  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.425392  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.425400  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.425409  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.425418  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.425707  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"402","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0830 21:33:40.426068  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:33:40.426078  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.426085  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.426090  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.427994  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:33:40.428006  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.428012  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.428017  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.428023  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.428028  975141 round_trippers.go:580]     Audit-Id: 3ef72efc-d476-4567-9b74-d34fae2dbaf0
	I0830 21:33:40.428033  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.428038  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.428170  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:33:40.428420  975141 pod_ready.go:92] pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace has status "Ready":"True"
	I0830 21:33:40.428431  975141 pod_ready.go:81] duration metric: took 5.267868ms waiting for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.428438  975141 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.428474  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-752665
	I0830 21:33:40.428481  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.428488  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.428494  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.430268  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:33:40.430278  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.430284  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.430290  975141 round_trippers.go:580]     Audit-Id: 4df96415-63f5-4094-b0f0-078c1e4b8dbb
	I0830 21:33:40.430296  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.430301  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.430306  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.430311  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.430475  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"407","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0830 21:33:40.430904  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:33:40.430922  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.430932  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.430939  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.432688  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:33:40.432699  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.432705  975141 round_trippers.go:580]     Audit-Id: 5c46a5fd-7a5f-48d0-ba14-2f7f4446e25d
	I0830 21:33:40.432710  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.432717  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.432725  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.432733  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.432743  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.432924  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:33:40.433167  975141 pod_ready.go:92] pod "etcd-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:33:40.433179  975141 pod_ready.go:81] duration metric: took 4.734821ms waiting for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.433190  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.433226  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-752665
	I0830 21:33:40.433234  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.433240  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.433246  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.435146  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:33:40.435170  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.435181  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.435190  975141 round_trippers.go:580]     Audit-Id: cc6fcbbc-5d6a-4ef0-85ee-020abd3d4294
	I0830 21:33:40.435199  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.435212  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.435222  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.435236  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.435329  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-752665","namespace":"kube-system","uid":"d813d11d-d0ec-4091-a72b-187bd44eabe3","resourceVersion":"408","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.20:8443","kubernetes.io/config.hash":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.mirror":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.seen":"2023-08-30T21:32:26.214498990Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0830 21:33:40.435763  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:33:40.435812  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.435824  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.435835  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.437532  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:33:40.437549  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.437558  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.437567  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.437575  975141 round_trippers.go:580]     Audit-Id: 8660dc3c-426c-4a87-916b-a8e284b4baea
	I0830 21:33:40.437587  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.437605  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.437618  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.437786  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:33:40.438018  975141 pod_ready.go:92] pod "kube-apiserver-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:33:40.438029  975141 pod_ready.go:81] duration metric: took 4.832973ms waiting for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.438035  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.438074  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-752665
	I0830 21:33:40.438081  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.438087  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.438095  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.439762  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:33:40.439791  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.439801  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.439807  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.439813  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.439819  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.439826  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.439835  975141 round_trippers.go:580]     Audit-Id: 1aff69eb-f2a9-4d3f-bbd1-b33220e5d8be
	I0830 21:33:40.439996  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-752665","namespace":"kube-system","uid":"0391b35f-5177-412c-b7d4-073efb2de36b","resourceVersion":"409","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.mirror":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.seen":"2023-08-30T21:32:26.214500244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0830 21:33:40.440280  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:33:40.440288  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.440295  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.440300  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.442055  975141 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:33:40.442073  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.442082  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.442088  975141 round_trippers.go:580]     Audit-Id: 542b5c39-add7-48f5-bff2-7eeca2e085dc
	I0830 21:33:40.442093  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.442099  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.442107  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.442112  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.442311  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:33:40.442547  975141 pod_ready.go:92] pod "kube-controller-manager-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:33:40.442557  975141 pod_ready.go:81] duration metric: took 4.514196ms waiting for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.442564  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.613959  975141 request.go:629] Waited for 171.323041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:33:40.614022  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:33:40.614026  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.614034  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.614041  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.618193  975141 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:33:40.618211  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.618218  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.618231  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.618240  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.618248  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.618256  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.618264  975141 round_trippers.go:580]     Audit-Id: 19d4ba77-8536-491b-b0e9-0c6b9bc154c5
	I0830 21:33:40.618726  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5twl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff4250a4-1482-42c0-a523-e97faf806c43","resourceVersion":"477","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0830 21:33:40.813494  975141 request.go:629] Waited for 194.341488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:40.813554  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:33:40.813558  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:40.813566  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:40.813572  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:40.816171  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:40.816187  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:40.816194  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:40.816202  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:40.816211  975141 round_trippers.go:580]     Content-Length: 3725
	I0830 21:33:40.816220  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:40 GMT
	I0830 21:33:40.816229  975141 round_trippers.go:580]     Audit-Id: 02dda2f1-8074-4362-a9d6-cce2ce99c40e
	I0830 21:33:40.816238  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:40.816249  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:40.816315  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"487","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I0830 21:33:40.816544  975141 pod_ready.go:92] pod "kube-proxy-5twl5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:33:40.816556  975141 pod_ready.go:81] duration metric: took 373.98661ms waiting for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:40.816566  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:41.013980  975141 request.go:629] Waited for 197.339928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:33:41.014048  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:33:41.014053  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:41.014060  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:41.014067  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:41.016476  975141 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:33:41.016498  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:41.016508  975141 round_trippers.go:580]     Audit-Id: e6a95cd8-d025-41ce-935f-06f2c658225a
	I0830 21:33:41.016516  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:41.016523  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:41.016528  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:41.016534  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:41.016539  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:41 GMT
	I0830 21:33:41.016931  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vltx5","generateName":"kube-proxy-","namespace":"kube-system","uid":"24ee271e-5778-4d0c-ab2c-77426f2673b3","resourceVersion":"375","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0830 21:33:41.213794  975141 request.go:629] Waited for 196.34734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:33:41.213873  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:33:41.213877  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:41.213886  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:41.213895  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:41.218617  975141 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:33:41.218656  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:41.218666  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:41 GMT
	I0830 21:33:41.218675  975141 round_trippers.go:580]     Audit-Id: 60ff1faa-e94c-4dac-85a0-c44ce979e492
	I0830 21:33:41.218683  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:41.218691  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:41.218698  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:41.218705  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:41.218948  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:33:41.219311  975141 pod_ready.go:92] pod "kube-proxy-vltx5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:33:41.219326  975141 pod_ready.go:81] duration metric: took 402.754368ms waiting for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:41.219336  975141 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:41.413776  975141 request.go:629] Waited for 194.358112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:33:41.413838  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:33:41.413842  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:41.413850  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:41.413862  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:41.417012  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:33:41.417030  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:41.417037  975141 round_trippers.go:580]     Audit-Id: caae16b0-e21b-4d73-9f10-ad78fe2cf1aa
	I0830 21:33:41.417042  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:41.417047  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:41.417052  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:41.417058  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:41.417063  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:41 GMT
	I0830 21:33:41.417256  975141 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-752665","namespace":"kube-system","uid":"4c8a6a98-51b6-4010-9519-add75ab1a7a9","resourceVersion":"353","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.mirror":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.seen":"2023-08-30T21:32:35.235897289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0830 21:33:41.614071  975141 request.go:629] Waited for 196.387924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:33:41.614153  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:33:41.614160  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:41.614172  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:41.614180  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:41.618232  975141 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:33:41.618251  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:41.618259  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:41.618264  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:41.618270  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:41.618275  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:41 GMT
	I0830 21:33:41.618281  975141 round_trippers.go:580]     Audit-Id: 7c2034b6-b5a8-4463-946b-8f3446e632df
	I0830 21:33:41.618286  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:41.619100  975141 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0830 21:33:41.619418  975141 pod_ready.go:92] pod "kube-scheduler-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:33:41.619432  975141 pod_ready.go:81] duration metric: took 400.089812ms waiting for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:33:41.619443  975141 pod_ready.go:38] duration metric: took 1.203422683s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:33:41.619455  975141 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:33:41.619508  975141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:33:41.632042  975141 system_svc.go:56] duration metric: took 12.577999ms WaitForService to wait for kubelet.
	I0830 21:33:41.632068  975141 kubeadm.go:581] duration metric: took 8.742188502s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:33:41.632088  975141 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:33:41.813487  975141 request.go:629] Waited for 181.321431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes
	I0830 21:33:41.813566  975141 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes
	I0830 21:33:41.813571  975141 round_trippers.go:469] Request Headers:
	I0830 21:33:41.813579  975141 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:33:41.813589  975141 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:33:41.817393  975141 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:33:41.817418  975141 round_trippers.go:577] Response Headers:
	I0830 21:33:41.817428  975141 round_trippers.go:580]     Audit-Id: e7aed4be-71c4-4c1d-9842-5ef86fce34e4
	I0830 21:33:41.817437  975141 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:33:41.817453  975141 round_trippers.go:580]     Content-Type: application/json
	I0830 21:33:41.817462  975141 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:33:41.817471  975141 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:33:41.817477  975141 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:33:41 GMT
	I0830 21:33:41.817954  975141 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"385","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9524 chars]
	I0830 21:33:41.818583  975141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:33:41.818612  975141 node_conditions.go:123] node cpu capacity is 2
	I0830 21:33:41.818632  975141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:33:41.818646  975141 node_conditions.go:123] node cpu capacity is 2
	I0830 21:33:41.818656  975141 node_conditions.go:105] duration metric: took 186.563173ms to run NodePressure ...
	I0830 21:33:41.818674  975141 start.go:228] waiting for startup goroutines ...
	I0830 21:33:41.818708  975141 start.go:242] writing updated cluster config ...
	I0830 21:33:41.819100  975141 ssh_runner.go:195] Run: rm -f paused
	I0830 21:33:41.869177  975141 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 21:33:41.871723  975141 out.go:177] * Done! kubectl is now configured to use "multinode-752665" cluster and "default" namespace by default
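	(Editor's note on the readiness wait traced above: the pod_ready.go lines poll each system-critical pod until its PodReady condition reports "True", and the ~200ms "Waited for ... due to client-side throttling" gaps come from client-go's client-side rate limiter. The snippet below is only a minimal, hypothetical sketch of that polling pattern using client-go; the namespace, pod name, and 6-minute timeout are taken from the log, while the helper name waitPodReady, the kubeconfig path, and the fixed 200ms sleep are assumptions for illustration and are not minikube's actual implementation.)
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls a pod until its PodReady condition is True or the timeout
	// elapses. Hypothetical sketch of the pattern visible in the pod_ready.go log
	// lines above, not minikube's source.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready:"True", as logged at pod_ready.go:92
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
			}
			// Fixed sleep stands in for the client-side rate limiting seen in the log.
			time.Sleep(200 * time.Millisecond)
		}
	}
	
	func main() {
		// Kubeconfig location is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod name and 6m timeout mirror the "waiting up to 6m0s" lines above.
		if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-vltx5", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	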
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 21:31:59 UTC, ends at Wed 2023-08-30 21:33:49 UTC. --
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.616941340Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-mzmpx,Uid:1fd37765-b8e2-4e0c-8e64-71d975f27bf8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431223004461734,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T21:33:42.670637291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:67db5a8a-290a-40a7-b42e-212d99db812a,Namespace:kube-system,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1693431173125226346,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/
tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-30T21:32:52.788409475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-zcppg,Uid:4742270b-6c64-411b-bfb6-8c53211aa106,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431173102686851,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T21:32:52.769146018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&PodSandboxMetadata{Name:kube-proxy-vltx5,Uid:24ee271e-5778-4d0c-ab2c-77426f2673b3,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1693431168284799343,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f2673b3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T21:32:47.631661507Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&PodSandboxMetadata{Name:kindnet-x5kk4,Uid:2fdd77f6-856a-4400-b881-210549c588e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431168260941675,Labels:map[string]string{app: kindnet,controller-revision-hash: 77b9cf4878,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fdd77f6-856a-4400-b881-210549c588e2,k8s-app: kindnet,pod-template-gener
ation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T21:32:47.623096257Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-752665,Uid:2957dd3360cebd27e85f1db4b73fa253,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431146765680477,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2957dd3360cebd27e85f1db4b73fa253,kubernetes.io/config.seen: 2023-08-30T21:32:26.214501004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-mul
tinode-752665,Uid:063d73d4de1cf2feb4ba920354d72513,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431146750703080,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.20:8443,kubernetes.io/config.hash: 063d73d4de1cf2feb4ba920354d72513,kubernetes.io/config.seen: 2023-08-30T21:32:26.214498990Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&PodSandboxMetadata{Name:etcd-multinode-752665,Uid:3d44ed339e19dd41d07034008e5b52b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431146725056965,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernet
es.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.20:2379,kubernetes.io/config.hash: 3d44ed339e19dd41d07034008e5b52b3,kubernetes.io/config.seen: 2023-08-30T21:32:26.214495184Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-752665,Uid:c398e6beaac5b42fe6a53cb0b1863506,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431146719886059,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: c398e6beaac5b42fe6a53cb0b1863506,kubernetes.io/config.seen: 2023-08-30T21:32:26.214500244Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=6be2a5dd-957b-4a91-aa2b-8903b2a17d46 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.617872551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b80bb26-f62a-4f94-9eb9-2af2d681e36a name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.617988485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b80bb26-f62a-4f94-9eb9-2af2d681e36a name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.618265404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf933f30c082dd147a28e3fc4d927692a8673a386cb7bd514bd94c1a01d46ff6,PodSandboxId:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431224580130578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a,PodSandboxId:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431173834226646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea33d3c8c4848dce373619fceaf6342557a81550bca322ce4b2ef864118fd610,PodSandboxId:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431173585682117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03,PodSandboxId:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431171175685304,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf,PodSandboxId:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431168858451488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0,PodSandboxId:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431147867614672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.
container.hash: 831a3116,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef,PodSandboxId:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431147575935575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6,PodSandboxId:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431147415834465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27,PodSandboxId:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431147150857370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b80bb26-f62a-4f94-9eb9-2af2d681e36a name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.825717016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1cb27181-2ae6-48f9-9a49-8160518fcc95 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.825782697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1cb27181-2ae6-48f9-9a49-8160518fcc95 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.826019688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf933f30c082dd147a28e3fc4d927692a8673a386cb7bd514bd94c1a01d46ff6,PodSandboxId:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431224580130578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a,PodSandboxId:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431173834226646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea33d3c8c4848dce373619fceaf6342557a81550bca322ce4b2ef864118fd610,PodSandboxId:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431173585682117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03,PodSandboxId:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431171175685304,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf,PodSandboxId:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431168858451488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0,PodSandboxId:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431147867614672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.
container.hash: 831a3116,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef,PodSandboxId:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431147575935575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6,PodSandboxId:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431147415834465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27,PodSandboxId:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431147150857370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1cb27181-2ae6-48f9-9a49-8160518fcc95 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.862344323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=13ee088b-f4f4-4817-af6f-cd90ed97a2e7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.862410242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=13ee088b-f4f4-4817-af6f-cd90ed97a2e7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.862613468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf933f30c082dd147a28e3fc4d927692a8673a386cb7bd514bd94c1a01d46ff6,PodSandboxId:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431224580130578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a,PodSandboxId:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431173834226646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea33d3c8c4848dce373619fceaf6342557a81550bca322ce4b2ef864118fd610,PodSandboxId:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431173585682117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03,PodSandboxId:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431171175685304,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf,PodSandboxId:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431168858451488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0,PodSandboxId:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431147867614672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.
container.hash: 831a3116,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef,PodSandboxId:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431147575935575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6,PodSandboxId:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431147415834465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27,PodSandboxId:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431147150857370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=13ee088b-f4f4-4817-af6f-cd90ed97a2e7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.895617172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b9ced385-75a8-417d-bb8e-77bb9e54abf3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.895680787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b9ced385-75a8-417d-bb8e-77bb9e54abf3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.895878794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf933f30c082dd147a28e3fc4d927692a8673a386cb7bd514bd94c1a01d46ff6,PodSandboxId:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431224580130578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a,PodSandboxId:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431173834226646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea33d3c8c4848dce373619fceaf6342557a81550bca322ce4b2ef864118fd610,PodSandboxId:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431173585682117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03,PodSandboxId:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431171175685304,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf,PodSandboxId:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431168858451488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0,PodSandboxId:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431147867614672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.
container.hash: 831a3116,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef,PodSandboxId:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431147575935575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6,PodSandboxId:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431147415834465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27,PodSandboxId:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431147150857370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b9ced385-75a8-417d-bb8e-77bb9e54abf3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.926936756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0b1047ae-de65-4441-aeb8-ac3df54013f5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.927003509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0b1047ae-de65-4441-aeb8-ac3df54013f5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.927274984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf933f30c082dd147a28e3fc4d927692a8673a386cb7bd514bd94c1a01d46ff6,PodSandboxId:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431224580130578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a,PodSandboxId:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431173834226646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea33d3c8c4848dce373619fceaf6342557a81550bca322ce4b2ef864118fd610,PodSandboxId:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431173585682117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03,PodSandboxId:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431171175685304,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf,PodSandboxId:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431168858451488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0,PodSandboxId:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431147867614672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.
container.hash: 831a3116,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef,PodSandboxId:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431147575935575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6,PodSandboxId:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431147415834465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27,PodSandboxId:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431147150857370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0b1047ae-de65-4441-aeb8-ac3df54013f5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.960735568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cf4a2269-e70a-41c2-bb07-0b0dcc9467cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.960800851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cf4a2269-e70a-41c2-bb07-0b0dcc9467cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.961017722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf933f30c082dd147a28e3fc4d927692a8673a386cb7bd514bd94c1a01d46ff6,PodSandboxId:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431224580130578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a,PodSandboxId:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431173834226646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea33d3c8c4848dce373619fceaf6342557a81550bca322ce4b2ef864118fd610,PodSandboxId:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431173585682117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03,PodSandboxId:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431171175685304,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf,PodSandboxId:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431168858451488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0,PodSandboxId:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431147867614672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.
container.hash: 831a3116,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef,PodSandboxId:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431147575935575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6,PodSandboxId:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431147415834465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27,PodSandboxId:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431147150857370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cf4a2269-e70a-41c2-bb07-0b0dcc9467cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.998251073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=19fa48b1-606c-4b9b-b345-a87c6a74ff89 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.998348760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=19fa48b1-606c-4b9b-b345-a87c6a74ff89 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:48 multinode-752665 crio[718]: time="2023-08-30 21:33:48.998550407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf933f30c082dd147a28e3fc4d927692a8673a386cb7bd514bd94c1a01d46ff6,PodSandboxId:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431224580130578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a,PodSandboxId:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431173834226646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea33d3c8c4848dce373619fceaf6342557a81550bca322ce4b2ef864118fd610,PodSandboxId:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431173585682117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03,PodSandboxId:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431171175685304,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf,PodSandboxId:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431168858451488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0,PodSandboxId:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431147867614672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.
container.hash: 831a3116,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef,PodSandboxId:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431147575935575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6,PodSandboxId:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431147415834465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27,PodSandboxId:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431147150857370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=19fa48b1-606c-4b9b-b345-a87c6a74ff89 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:49 multinode-752665 crio[718]: time="2023-08-30 21:33:49.034229749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=401da50b-5a34-4aec-8e71-389f83154980 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:49 multinode-752665 crio[718]: time="2023-08-30 21:33:49.034296174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=401da50b-5a34-4aec-8e71-389f83154980 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:33:49 multinode-752665 crio[718]: time="2023-08-30 21:33:49.034508664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf933f30c082dd147a28e3fc4d927692a8673a386cb7bd514bd94c1a01d46ff6,PodSandboxId:e7c86423034b2f5c6c8c396f2fa6c43b284e89ef767633fc925377ec4f12089a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431224580130578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a,PodSandboxId:593717c725d3bbd668185c574563c8b245ff4e4f4bbd49730af5617c32017cb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431173834226646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea33d3c8c4848dce373619fceaf6342557a81550bca322ce4b2ef864118fd610,PodSandboxId:9101a82bd5053f513c590ad7b709320a0cab44b67070ccb379652c264120948b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431173585682117,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03,PodSandboxId:c51d8687a078385eb0afff74eff6c05d6a52cb4dd587e389775f3281baeb7c3b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431171175685304,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf,PodSandboxId:81dd8ceaa0372e7939c68d09becf3725f2ce01b7917d02b21da9a4bea482a14b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431168858451488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0,PodSandboxId:2f52725abb233d3c6c447f45fca3dfec0524b84dcdc479a67831a357db1c525f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431147867614672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.
container.hash: 831a3116,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef,PodSandboxId:0e65a6c5d7f0de55beca87b361b1040f27680f794d1d2e07347552635d8c03c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431147575935575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6,PodSandboxId:a7299cd2445ad97867ad089abb259159795e8f606fc29cb45f4e48ff4ff904e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431147415834465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27,PodSandboxId:a33ecb12d16534d506428791e9973bbb6052bd33933534592c3bbaf9c0d2e63b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431147150857370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=401da50b-5a34-4aec-8e71-389f83154980 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	bf933f30c082d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   e7c86423034b2
	241bf313f3873       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   593717c725d3b
	ea33d3c8c4848       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       0                   9101a82bd5053
	fad2ab7af3036       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      57 seconds ago       Running             kindnet-cni               0                   c51d8687a0783
	5d374a64c3ec6       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      About a minute ago   Running             kube-proxy                0                   81dd8ceaa0372
	9232fae9291cc       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   2f52725abb233
	da19ce3a7414b       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      About a minute ago   Running             kube-scheduler            0                   0e65a6c5d7f0d
	a13f9a498d111       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      About a minute ago   Running             kube-apiserver            0                   a7299cd2445ad
	cf0ac2b9aa609       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      About a minute ago   Running             kube-controller-manager   0                   a33ecb12d1653
	
	* 
	* ==> coredns [241bf313f387341bce2d683364fb95d1bee1d71e69b2dea6afb549f9b8ae753a] <==
	* [INFO] 10.244.1.2:60589 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202598s
	[INFO] 10.244.0.3:34227 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077278s
	[INFO] 10.244.0.3:43604 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0015961s
	[INFO] 10.244.0.3:35908 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008376s
	[INFO] 10.244.0.3:40687 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084104s
	[INFO] 10.244.0.3:59053 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000960463s
	[INFO] 10.244.0.3:47830 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072773s
	[INFO] 10.244.0.3:49082 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060855s
	[INFO] 10.244.0.3:50840 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076735s
	[INFO] 10.244.1.2:47601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181403s
	[INFO] 10.244.1.2:51766 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120892s
	[INFO] 10.244.1.2:54355 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145958s
	[INFO] 10.244.1.2:52531 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083902s
	[INFO] 10.244.0.3:33369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071239s
	[INFO] 10.244.0.3:42607 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040318s
	[INFO] 10.244.0.3:58827 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000027546s
	[INFO] 10.244.0.3:38683 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000035095s
	[INFO] 10.244.1.2:53828 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182675s
	[INFO] 10.244.1.2:43194 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000329054s
	[INFO] 10.244.1.2:55052 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171262s
	[INFO] 10.244.1.2:42383 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168807s
	[INFO] 10.244.0.3:47544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134446s
	[INFO] 10.244.0.3:49923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008739s
	[INFO] 10.244.0.3:33283 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082411s
	[INFO] 10.244.0.3:48714 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078924s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-752665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-752665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=multinode-752665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T21_32_36_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:32:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-752665
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:33:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:32:52 +0000   Wed, 30 Aug 2023 21:32:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:32:52 +0000   Wed, 30 Aug 2023 21:32:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:32:52 +0000   Wed, 30 Aug 2023 21:32:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:32:52 +0000   Wed, 30 Aug 2023 21:32:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    multinode-752665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a335ffec70c54a6faef870bcf3c0d15e
	  System UUID:                a335ffec-70c5-4a6f-aef8-70bcf3c0d15e
	  Boot ID:                    0139484c-1cc3-4bfd-bb76-9660d6960e72
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-mzmpx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-zcppg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-multinode-752665                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	  kube-system                 kindnet-x5kk4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      62s
	  kube-system                 kube-apiserver-multinode-752665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-multinode-752665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-vltx5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-752665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 74s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s   kubelet          Node multinode-752665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet          Node multinode-752665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet          Node multinode-752665 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s   node-controller  Node multinode-752665 event: Registered Node multinode-752665 in Controller
	  Normal  NodeReady                57s   kubelet          Node multinode-752665 status is now: NodeReady
	
	
	Name:               multinode-752665-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-752665-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:33:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-752665-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:33:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:33:39 +0000   Wed, 30 Aug 2023 21:33:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:33:39 +0000   Wed, 30 Aug 2023 21:33:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:33:39 +0000   Wed, 30 Aug 2023 21:33:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:33:39 +0000   Wed, 30 Aug 2023 21:33:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.46
	  Hostname:    multinode-752665-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4c536b276364b2c8c36c1397595c512
	  System UUID:                c4c536b2-7636-4b2c-8c36-c1397595c512
	  Boot ID:                    2969c860-c01d-40ee-9780-4d6aaa6f43b6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-j4rx4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-4q5fx               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17s
	  kube-system                 kube-proxy-5twl5            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x5 over 19s)  kubelet          Node multinode-752665-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 19s)  kubelet          Node multinode-752665-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 19s)  kubelet          Node multinode-752665-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node multinode-752665-m02 event: Registered Node multinode-752665-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-752665-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug30 21:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072444] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.323964] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.435461] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148662] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug30 21:32] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.203557] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.106247] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.149299] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.111089] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.209818] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +10.165235] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +9.292954] systemd-fstab-generator[1259]: Ignoring "noauto" for root device
	[ +19.592455] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [9232fae9291cc951aac2f1cc31660c6a539eb0019943c57f967ea48cccdc0fa0] <==
	* {"level":"info","ts":"2023-08-30T21:32:29.37308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 switched to configuration voters=(11351028140387178485)"}
	{"level":"info","ts":"2023-08-30T21:32:29.373148Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e50fb330f7c278b","local-member-id":"9d86f3f40f3d97f5","added-peer-id":"9d86f3f40f3d97f5","added-peer-peer-urls":["https://192.168.39.20:2380"]}
	{"level":"info","ts":"2023-08-30T21:32:29.373865Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-30T21:32:29.374072Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.20:2380"}
	{"level":"info","ts":"2023-08-30T21:32:29.374594Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.20:2380"}
	{"level":"info","ts":"2023-08-30T21:32:29.375119Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-30T21:32:29.375049Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9d86f3f40f3d97f5","initial-advertise-peer-urls":["https://192.168.39.20:2380"],"listen-peer-urls":["https://192.168.39.20:2380"],"advertise-client-urls":["https://192.168.39.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-30T21:32:29.649623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-30T21:32:29.649685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-30T21:32:29.649713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 received MsgPreVoteResp from 9d86f3f40f3d97f5 at term 1"}
	{"level":"info","ts":"2023-08-30T21:32:29.649726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 became candidate at term 2"}
	{"level":"info","ts":"2023-08-30T21:32:29.649732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 received MsgVoteResp from 9d86f3f40f3d97f5 at term 2"}
	{"level":"info","ts":"2023-08-30T21:32:29.64974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 became leader at term 2"}
	{"level":"info","ts":"2023-08-30T21:32:29.649747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9d86f3f40f3d97f5 elected leader 9d86f3f40f3d97f5 at term 2"}
	{"level":"info","ts":"2023-08-30T21:32:29.651807Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:32:29.652217Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9d86f3f40f3d97f5","local-member-attributes":"{Name:multinode-752665 ClientURLs:[https://192.168.39.20:2379]}","request-path":"/0/members/9d86f3f40f3d97f5/attributes","cluster-id":"e50fb330f7c278b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T21:32:29.652279Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T21:32:29.653045Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e50fb330f7c278b","local-member-id":"9d86f3f40f3d97f5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:32:29.653319Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:32:29.653367Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:32:29.653749Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T21:32:29.654241Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T21:32:29.657805Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.20:2379"}
	{"level":"info","ts":"2023-08-30T21:32:29.660231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T21:32:29.660245Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  21:33:49 up 1 min,  0 users,  load average: 0.80, 0.24, 0.08
	Linux multinode-752665 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [fad2ab7af30360a5ff255c037c66aeba977a33c1605e47bf0dd4c94569804b03] <==
	* I0830 21:32:51.729952       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0830 21:32:51.730032       1 main.go:107] hostIP = 192.168.39.20
	podIP = 192.168.39.20
	I0830 21:32:51.730136       1 main.go:116] setting mtu 1500 for CNI 
	I0830 21:32:51.730219       1 main.go:146] kindnetd IP family: "ipv4"
	I0830 21:32:51.730237       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0830 21:32:52.234146       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:32:52.234261       1 main.go:227] handling current node
	I0830 21:33:02.247767       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:33:02.247928       1 main.go:227] handling current node
	I0830 21:33:12.251908       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:33:12.251953       1 main.go:227] handling current node
	I0830 21:33:22.255688       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:33:22.255803       1 main.go:227] handling current node
	I0830 21:33:32.277883       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:33:32.277966       1 main.go:227] handling current node
	I0830 21:33:32.277991       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0830 21:33:32.278009       1 main.go:250] Node multinode-752665-m02 has CIDR [10.244.1.0/24] 
	I0830 21:33:32.278275       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.46 Flags: [] Table: 0} 
	I0830 21:33:42.284098       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:33:42.284148       1 main.go:227] handling current node
	I0830 21:33:42.284227       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0830 21:33:42.284234       1 main.go:250] Node multinode-752665-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6] <==
	* I0830 21:32:31.746810       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0830 21:32:31.748635       1 shared_informer.go:318] Caches are synced for configmaps
	I0830 21:32:31.749144       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0830 21:32:31.749487       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0830 21:32:31.749794       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0830 21:32:31.756090       1 controller.go:624] quota admission added evaluator for: namespaces
	E0830 21:32:31.772614       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0830 21:32:31.820726       1 cache.go:39] Caches are synced for autoregister controller
	I0830 21:32:31.842129       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0830 21:32:31.975071       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0830 21:32:32.654339       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0830 21:32:32.659596       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0830 21:32:32.659699       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0830 21:32:33.283242       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0830 21:32:33.336376       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0830 21:32:33.480090       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0830 21:32:33.489137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.20]
	I0830 21:32:33.490134       1 controller.go:624] quota admission added evaluator for: endpoints
	I0830 21:32:33.494276       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0830 21:32:33.716473       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0830 21:32:35.072309       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0830 21:32:35.087760       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0830 21:32:35.098611       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0830 21:32:47.336498       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0830 21:32:47.511942       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [cf0ac2b9aa60995bef25d7bf6ef3a8cf87b2b04e0c399e062b27d03d7fc6ea27] <==
	* I0830 21:32:48.159463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.561µs"
	I0830 21:32:52.772045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.743µs"
	I0830 21:32:52.808771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.982µs"
	I0830 21:32:54.485730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.965391ms"
	I0830 21:32:54.486387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.756µs"
	I0830 21:32:56.533378       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0830 21:33:32.184013       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-752665-m02\" does not exist"
	I0830 21:33:32.211006       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5twl5"
	I0830 21:33:32.222779       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-752665-m02" podCIDRs=["10.244.1.0/24"]
	I0830 21:33:32.223390       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4q5fx"
	I0830 21:33:36.539886       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-752665-m02"
	I0830 21:33:36.539968       1 event.go:307] "Event occurred" object="multinode-752665-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-752665-m02 event: Registered Node multinode-752665-m02 in Controller"
	I0830 21:33:39.936733       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-752665-m02"
	I0830 21:33:42.615753       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0830 21:33:42.634548       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-j4rx4"
	I0830 21:33:42.649019       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-mzmpx"
	I0830 21:33:42.664052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.950242ms"
	I0830 21:33:42.687371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.063174ms"
	I0830 21:33:42.687500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.691µs"
	I0830 21:33:42.703728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.65µs"
	I0830 21:33:42.703903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.635µs"
	I0830 21:33:45.313070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.020853ms"
	I0830 21:33:45.313280       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="42.398µs"
	I0830 21:33:45.638039       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.098078ms"
	I0830 21:33:45.638759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.301µs"
	
	* 
	* ==> kube-proxy [5d374a64c3ec60e1b7d28ec245fbcd943ec9c06b24bb3f65f1780f676a3e8dbf] <==
	* I0830 21:32:49.074534       1 server_others.go:69] "Using iptables proxy"
	I0830 21:32:49.088788       1 node.go:141] Successfully retrieved node IP: 192.168.39.20
	I0830 21:32:49.135372       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 21:32:49.135442       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 21:32:49.138595       1 server_others.go:152] "Using iptables Proxier"
	I0830 21:32:49.138663       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 21:32:49.138940       1 server.go:846] "Version info" version="v1.28.1"
	I0830 21:32:49.138975       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 21:32:49.139907       1 config.go:188] "Starting service config controller"
	I0830 21:32:49.139960       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 21:32:49.139995       1 config.go:97] "Starting endpoint slice config controller"
	I0830 21:32:49.140026       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 21:32:49.140570       1 config.go:315] "Starting node config controller"
	I0830 21:32:49.140607       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 21:32:49.241062       1 shared_informer.go:318] Caches are synced for node config
	I0830 21:32:49.241119       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 21:32:49.241368       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [da19ce3a7414be7fd564681108a860e8b42804cba15bd2c977d68104db02c9ef] <==
	* W0830 21:32:31.767725       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 21:32:31.767736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0830 21:32:31.767784       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 21:32:31.767792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0830 21:32:31.767817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 21:32:31.767824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0830 21:32:32.687613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 21:32:32.687683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0830 21:32:32.704742       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 21:32:32.704794       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 21:32:32.712957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 21:32:32.713002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0830 21:32:32.716332       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 21:32:32.716385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0830 21:32:32.734367       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 21:32:32.734417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0830 21:32:32.752415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 21:32:32.752439       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0830 21:32:32.963826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 21:32:32.963974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0830 21:32:32.993475       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 21:32:32.993599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 21:32:33.007246       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 21:32:33.007327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0830 21:32:35.253381       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 21:31:59 UTC, ends at Wed 2023-08-30 21:33:49 UTC. --
	Aug 30 21:32:47 multinode-752665 kubelet[1266]: I0830 21:32:47.758633    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fdd77f6-856a-4400-b881-210549c588e2-xtables-lock\") pod \"kindnet-x5kk4\" (UID: \"2fdd77f6-856a-4400-b881-210549c588e2\") " pod="kube-system/kindnet-x5kk4"
	Aug 30 21:32:47 multinode-752665 kubelet[1266]: I0830 21:32:47.758675    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgpb\" (UniqueName: \"kubernetes.io/projected/2fdd77f6-856a-4400-b881-210549c588e2-kube-api-access-qdgpb\") pod \"kindnet-x5kk4\" (UID: \"2fdd77f6-856a-4400-b881-210549c588e2\") " pod="kube-system/kindnet-x5kk4"
	Aug 30 21:32:47 multinode-752665 kubelet[1266]: I0830 21:32:47.758697    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fdd77f6-856a-4400-b881-210549c588e2-lib-modules\") pod \"kindnet-x5kk4\" (UID: \"2fdd77f6-856a-4400-b881-210549c588e2\") " pod="kube-system/kindnet-x5kk4"
	Aug 30 21:32:47 multinode-752665 kubelet[1266]: I0830 21:32:47.758716    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24ee271e-5778-4d0c-ab2c-77426f2673b3-xtables-lock\") pod \"kube-proxy-vltx5\" (UID: \"24ee271e-5778-4d0c-ab2c-77426f2673b3\") " pod="kube-system/kube-proxy-vltx5"
	Aug 30 21:32:47 multinode-752665 kubelet[1266]: I0830 21:32:47.758733    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24ee271e-5778-4d0c-ab2c-77426f2673b3-lib-modules\") pod \"kube-proxy-vltx5\" (UID: \"24ee271e-5778-4d0c-ab2c-77426f2673b3\") " pod="kube-system/kube-proxy-vltx5"
	Aug 30 21:32:47 multinode-752665 kubelet[1266]: I0830 21:32:47.758750    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csm5x\" (UniqueName: \"kubernetes.io/projected/24ee271e-5778-4d0c-ab2c-77426f2673b3-kube-api-access-csm5x\") pod \"kube-proxy-vltx5\" (UID: \"24ee271e-5778-4d0c-ab2c-77426f2673b3\") " pod="kube-system/kube-proxy-vltx5"
	Aug 30 21:32:47 multinode-752665 kubelet[1266]: I0830 21:32:47.758778    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2fdd77f6-856a-4400-b881-210549c588e2-cni-cfg\") pod \"kindnet-x5kk4\" (UID: \"2fdd77f6-856a-4400-b881-210549c588e2\") " pod="kube-system/kindnet-x5kk4"
	Aug 30 21:32:47 multinode-752665 kubelet[1266]: I0830 21:32:47.758800    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24ee271e-5778-4d0c-ab2c-77426f2673b3-kube-proxy\") pod \"kube-proxy-vltx5\" (UID: \"24ee271e-5778-4d0c-ab2c-77426f2673b3\") " pod="kube-system/kube-proxy-vltx5"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.442518    1266 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vltx5" podStartSLOduration=5.442462412 podCreationTimestamp="2023-08-30 21:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 21:32:49.430442935 +0000 UTC m=+14.376112288" watchObservedRunningTime="2023-08-30 21:32:52.442462412 +0000 UTC m=+17.388131764"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.714627    1266 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.769428    1266 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-x5kk4" podStartSLOduration=5.769365686 podCreationTimestamp="2023-08-30 21:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 21:32:52.444353334 +0000 UTC m=+17.390022687" watchObservedRunningTime="2023-08-30 21:32:52.769365686 +0000 UTC m=+17.715035039"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.769629    1266 topology_manager.go:215] "Topology Admit Handler" podUID="4742270b-6c64-411b-bfb6-8c53211aa106" podNamespace="kube-system" podName="coredns-5dd5756b68-zcppg"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.788465    1266 topology_manager.go:215] "Topology Admit Handler" podUID="67db5a8a-290a-40a7-b42e-212d99db812a" podNamespace="kube-system" podName="storage-provisioner"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.898744    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb6ns\" (UniqueName: \"kubernetes.io/projected/67db5a8a-290a-40a7-b42e-212d99db812a-kube-api-access-kb6ns\") pod \"storage-provisioner\" (UID: \"67db5a8a-290a-40a7-b42e-212d99db812a\") " pod="kube-system/storage-provisioner"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.898829    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/67db5a8a-290a-40a7-b42e-212d99db812a-tmp\") pod \"storage-provisioner\" (UID: \"67db5a8a-290a-40a7-b42e-212d99db812a\") " pod="kube-system/storage-provisioner"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.898855    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4742270b-6c64-411b-bfb6-8c53211aa106-config-volume\") pod \"coredns-5dd5756b68-zcppg\" (UID: \"4742270b-6c64-411b-bfb6-8c53211aa106\") " pod="kube-system/coredns-5dd5756b68-zcppg"
	Aug 30 21:32:52 multinode-752665 kubelet[1266]: I0830 21:32:52.898875    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69bhj\" (UniqueName: \"kubernetes.io/projected/4742270b-6c64-411b-bfb6-8c53211aa106-kube-api-access-69bhj\") pod \"coredns-5dd5756b68-zcppg\" (UID: \"4742270b-6c64-411b-bfb6-8c53211aa106\") " pod="kube-system/coredns-5dd5756b68-zcppg"
	Aug 30 21:32:54 multinode-752665 kubelet[1266]: I0830 21:32:54.466647    1266 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.466609902 podCreationTimestamp="2023-08-30 21:32:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 21:32:54.452006665 +0000 UTC m=+19.397675997" watchObservedRunningTime="2023-08-30 21:32:54.466609902 +0000 UTC m=+19.412279255"
	Aug 30 21:32:55 multinode-752665 kubelet[1266]: I0830 21:32:55.374026    1266 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zcppg" podStartSLOduration=8.373994487 podCreationTimestamp="2023-08-30 21:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 21:32:54.466843697 +0000 UTC m=+19.412513050" watchObservedRunningTime="2023-08-30 21:32:55.373994487 +0000 UTC m=+20.319663882"
	Aug 30 21:33:35 multinode-752665 kubelet[1266]: E0830 21:33:35.525657    1266 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 21:33:35 multinode-752665 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 21:33:35 multinode-752665 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 21:33:35 multinode-752665 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 21:33:42 multinode-752665 kubelet[1266]: I0830 21:33:42.670909    1266 topology_manager.go:215] "Topology Admit Handler" podUID="1fd37765-b8e2-4e0c-8e64-71d975f27bf8" podNamespace="default" podName="busybox-5bc68d56bd-mzmpx"
	Aug 30 21:33:42 multinode-752665 kubelet[1266]: I0830 21:33:42.780445    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9dm\" (UniqueName: \"kubernetes.io/projected/1fd37765-b8e2-4e0c-8e64-71d975f27bf8-kube-api-access-sm9dm\") pod \"busybox-5bc68d56bd-mzmpx\" (UID: \"1fd37765-b8e2-4e0c-8e64-71d975f27bf8\") " pod="default/busybox-5bc68d56bd-mzmpx"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-752665 -n multinode-752665
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-752665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.13s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (688.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-752665
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-752665
E0830 21:36:49.716112  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:36:57.077525  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-752665: exit status 82 (2m0.848448819s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-752665"  ...
	* Stopping node "multinode-752665"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p multinode-752665" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752665 --wait=true -v=8 --alsologtostderr
E0830 21:38:20.123974  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:39:22.735079  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:41:49.715955  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:41:57.077016  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:43:12.761292  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:44:22.736012  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:45:45.784693  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-752665 --wait=true -v=8 --alsologtostderr: (9m24.505764578s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-752665
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-752665 -n multinode-752665
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-752665 logs -n 25: (1.447394117s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-752665 cp multinode-752665-m02:/home/docker/cp-test.txt                       | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile458377608/001/cp-test_multinode-752665-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-752665 cp multinode-752665-m02:/home/docker/cp-test.txt                       | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665:/home/docker/cp-test_multinode-752665-m02_multinode-752665.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n multinode-752665 sudo cat                                       | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | /home/docker/cp-test_multinode-752665-m02_multinode-752665.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-752665 cp multinode-752665-m02:/home/docker/cp-test.txt                       | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m03:/home/docker/cp-test_multinode-752665-m02_multinode-752665-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n multinode-752665-m03 sudo cat                                   | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | /home/docker/cp-test_multinode-752665-m02_multinode-752665-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-752665 cp testdata/cp-test.txt                                                | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-752665 cp multinode-752665-m03:/home/docker/cp-test.txt                       | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile458377608/001/cp-test_multinode-752665-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-752665 cp multinode-752665-m03:/home/docker/cp-test.txt                       | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665:/home/docker/cp-test_multinode-752665-m03_multinode-752665.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n multinode-752665 sudo cat                                       | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | /home/docker/cp-test_multinode-752665-m03_multinode-752665.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-752665 cp multinode-752665-m03:/home/docker/cp-test.txt                       | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m02:/home/docker/cp-test_multinode-752665-m03_multinode-752665-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n multinode-752665-m02 sudo cat                                   | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | /home/docker/cp-test_multinode-752665-m03_multinode-752665-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-752665 node stop m03                                                          | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	| node    | multinode-752665 node start                                                             | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:35 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-752665                                                                | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:35 UTC |                     |
	| stop    | -p multinode-752665                                                                     | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:35 UTC |                     |
	| start   | -p multinode-752665                                                                     | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC | 30 Aug 23 21:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-752665                                                                | multinode-752665 | jenkins | v1.31.2 | 30 Aug 23 21:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
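	For reference, the tail of the audit table above corresponds roughly to the following minikube CLI sequence (profile name and flags taken from the table rows; this is a hedged reconstruction for readability, not additional test output):
	
	    minikube node stop m03 -p multinode-752665
	    minikube node start m03 --alsologtostderr -p multinode-752665
	    minikube node list -p multinode-752665
	    minikube stop -p multinode-752665
	    minikube start -p multinode-752665 --wait=true -v=8 --alsologtostderr
	    minikube node list -p multinode-752665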
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:37:12
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:37:12.850878  978470 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:37:12.851018  978470 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:37:12.851028  978470 out.go:309] Setting ErrFile to fd 2...
	I0830 21:37:12.851035  978470 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:37:12.851254  978470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 21:37:12.851887  978470 out.go:303] Setting JSON to false
	I0830 21:37:12.853045  978470 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11980,"bootTime":1693419453,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:37:12.853110  978470 start.go:138] virtualization: kvm guest
	I0830 21:37:12.855739  978470 out.go:177] * [multinode-752665] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 21:37:12.857369  978470 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 21:37:12.857427  978470 notify.go:220] Checking for updates...
	I0830 21:37:12.858758  978470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:37:12.860486  978470 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:37:12.861807  978470 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:37:12.863174  978470 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 21:37:12.864431  978470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:37:12.866158  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:37:12.866291  978470 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:37:12.866712  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:37:12.866813  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:37:12.882314  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I0830 21:37:12.882796  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:37:12.883398  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:37:12.883428  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:37:12.883858  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:37:12.884050  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:37:12.920561  978470 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 21:37:12.921999  978470 start.go:298] selected driver: kvm2
	I0830 21:37:12.922016  978470 start.go:902] validating driver "kvm2" against &{Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:37:12.922167  978470 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:37:12.922502  978470 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:37:12.922571  978470 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 21:37:12.938403  978470 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 21:37:12.939100  978470 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 21:37:12.939141  978470 cni.go:84] Creating CNI manager for ""
	I0830 21:37:12.939153  978470 cni.go:136] 3 nodes found, recommending kindnet
	I0830 21:37:12.939165  978470 start_flags.go:319] config:
	{Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provi
sioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0}
	I0830 21:37:12.939470  978470 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:37:12.941255  978470 out.go:177] * Starting control plane node multinode-752665 in cluster multinode-752665
	I0830 21:37:12.942468  978470 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:37:12.942497  978470 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 21:37:12.942505  978470 cache.go:57] Caching tarball of preloaded images
	I0830 21:37:12.942593  978470 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 21:37:12.942603  978470 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 21:37:12.942779  978470 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:37:12.942971  978470 start.go:365] acquiring machines lock for multinode-752665: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:37:12.943019  978470 start.go:369] acquired machines lock for "multinode-752665" in 27.155µs
	I0830 21:37:12.943033  978470 start.go:96] Skipping create...Using existing machine configuration
	I0830 21:37:12.943042  978470 fix.go:54] fixHost starting: 
	I0830 21:37:12.943307  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:37:12.943338  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:37:12.957471  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41775
	I0830 21:37:12.957962  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:37:12.958471  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:37:12.958500  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:37:12.958828  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:37:12.959015  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:37:12.959173  978470 main.go:141] libmachine: (multinode-752665) Calling .GetState
	I0830 21:37:12.960693  978470 fix.go:102] recreateIfNeeded on multinode-752665: state=Running err=<nil>
	W0830 21:37:12.960710  978470 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 21:37:12.962870  978470 out.go:177] * Updating the running kvm2 "multinode-752665" VM ...
	I0830 21:37:12.964308  978470 machine.go:88] provisioning docker machine ...
	I0830 21:37:12.964329  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:37:12.964525  978470 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:37:12.964698  978470 buildroot.go:166] provisioning hostname "multinode-752665"
	I0830 21:37:12.964717  978470 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:37:12.964845  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:37:12.967315  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:37:12.967706  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:37:12.967740  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:37:12.967869  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:37:12.968032  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:37:12.968198  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:37:12.968301  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:37:12.968461  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:37:12.968878  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:37:12.968897  978470 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-752665 && echo "multinode-752665" | sudo tee /etc/hostname
	I0830 21:37:31.315984  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:37:37.396073  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:37:40.468050  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:37:46.548102  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:37:49.620025  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:37:55.700088  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:37:58.772042  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:04.852060  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:07.924011  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:14.004038  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:17.076023  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:23.156121  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:26.227999  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:32.308058  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:35.380057  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:41.460065  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:44.532035  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:50.612077  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:53.684051  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:38:59.764041  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:02.836061  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:08.916084  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:11.988104  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:18.068760  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:21.140085  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:27.220077  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:30.292047  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:36.372101  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:39.444094  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:45.524171  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:48.596144  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:54.676020  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:39:57.748012  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:03.828108  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:06.900125  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:12.980051  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:16.052116  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:22.132088  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:25.204034  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:31.284052  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:34.356066  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:40.436031  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:43.508022  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:49.588125  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:52.660023  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:40:58.740067  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:01.812022  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:07.892028  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:10.964085  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:17.044081  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:20.116007  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:26.196125  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:29.268104  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:35.348105  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:38.420034  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:44.500055  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:47.572012  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:53.652048  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:41:56.724118  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:42:02.804042  978470 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.20:22: connect: no route to host
	I0830 21:42:05.806629  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:42:05.806673  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:05.808771  978470 machine.go:91] provisioned docker machine in 4m52.844444191s
	I0830 21:42:05.808815  978470 fix.go:56] fixHost completed within 4m52.865774015s
	I0830 21:42:05.808820  978470 start.go:83] releasing machines lock for "multinode-752665", held for 4m52.865792994s
	W0830 21:42:05.808857  978470 start.go:672] error starting host: provision: host is not running
	W0830 21:42:05.809021  978470 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0830 21:42:05.809032  978470 start.go:687] Will try again in 5 seconds ...
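	Every SSH dial to 192.168.39.20:22 between 21:37:31 and 21:42:02 above failed with "no route to host", so this first fixHost pass spent ~4m52s getting nowhere before minikube gave up on the running-VM path and, in the block below, restarts the VM instead. A quick manual check of the same condition would be something like the netcat probe here (host and port taken from the log; the command itself is only an illustrative sketch, not part of the test run):
	
	    nc -vz 192.168.39.20 22   # reports "no route to host" while the VM is unreachable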
	I0830 21:42:10.811130  978470 start.go:365] acquiring machines lock for multinode-752665: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:42:10.811303  978470 start.go:369] acquired machines lock for "multinode-752665" in 108.435µs
	I0830 21:42:10.811339  978470 start.go:96] Skipping create...Using existing machine configuration
	I0830 21:42:10.811349  978470 fix.go:54] fixHost starting: 
	I0830 21:42:10.811947  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:42:10.811991  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:42:10.827158  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32885
	I0830 21:42:10.827687  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:42:10.828348  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:42:10.828379  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:42:10.828732  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:42:10.828929  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:42:10.829089  978470 main.go:141] libmachine: (multinode-752665) Calling .GetState
	I0830 21:42:10.830865  978470 fix.go:102] recreateIfNeeded on multinode-752665: state=Stopped err=<nil>
	I0830 21:42:10.830891  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	W0830 21:42:10.831035  978470 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 21:42:10.833257  978470 out.go:177] * Restarting existing kvm2 VM for "multinode-752665" ...
	I0830 21:42:10.834778  978470 main.go:141] libmachine: (multinode-752665) Calling .Start
	I0830 21:42:10.834941  978470 main.go:141] libmachine: (multinode-752665) Ensuring networks are active...
	I0830 21:42:10.835679  978470 main.go:141] libmachine: (multinode-752665) Ensuring network default is active
	I0830 21:42:10.836093  978470 main.go:141] libmachine: (multinode-752665) Ensuring network mk-multinode-752665 is active
	I0830 21:42:10.836405  978470 main.go:141] libmachine: (multinode-752665) Getting domain xml...
	I0830 21:42:10.837071  978470 main.go:141] libmachine: (multinode-752665) Creating domain...
	I0830 21:42:12.057103  978470 main.go:141] libmachine: (multinode-752665) Waiting to get IP...
	I0830 21:42:12.057948  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:12.058386  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:12.058494  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:12.058395  979284 retry.go:31] will retry after 271.572819ms: waiting for machine to come up
	I0830 21:42:12.332003  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:12.332459  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:12.332495  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:12.332396  979284 retry.go:31] will retry after 310.092822ms: waiting for machine to come up
	I0830 21:42:12.643788  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:12.644164  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:12.644193  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:12.644111  979284 retry.go:31] will retry after 426.095922ms: waiting for machine to come up
	I0830 21:42:13.071467  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:13.071881  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:13.071908  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:13.071838  979284 retry.go:31] will retry after 455.295234ms: waiting for machine to come up
	I0830 21:42:13.528383  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:13.528848  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:13.528878  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:13.528817  979284 retry.go:31] will retry after 643.843599ms: waiting for machine to come up
	I0830 21:42:14.174680  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:14.175186  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:14.175221  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:14.175144  979284 retry.go:31] will retry after 903.058389ms: waiting for machine to come up
	I0830 21:42:15.079964  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:15.080419  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:15.080449  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:15.080370  979284 retry.go:31] will retry after 794.506009ms: waiting for machine to come up
	I0830 21:42:15.876688  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:15.877071  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:15.877103  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:15.877014  979284 retry.go:31] will retry after 1.021801076s: waiting for machine to come up
	I0830 21:42:16.900300  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:16.900694  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:16.900718  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:16.900652  979284 retry.go:31] will retry after 1.482457961s: waiting for machine to come up
	I0830 21:42:18.384360  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:18.384761  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:18.384785  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:18.384712  979284 retry.go:31] will retry after 1.826292888s: waiting for machine to come up
	I0830 21:42:20.213101  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:20.213619  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:20.213687  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:20.213590  979284 retry.go:31] will retry after 2.08353603s: waiting for machine to come up
	I0830 21:42:22.299156  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:22.299591  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:22.299624  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:22.299541  979284 retry.go:31] will retry after 2.610801583s: waiting for machine to come up
	I0830 21:42:24.913323  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:24.913764  978470 main.go:141] libmachine: (multinode-752665) DBG | unable to find current IP address of domain multinode-752665 in network mk-multinode-752665
	I0830 21:42:24.913797  978470 main.go:141] libmachine: (multinode-752665) DBG | I0830 21:42:24.913704  979284 retry.go:31] will retry after 3.802604556s: waiting for machine to come up
	I0830 21:42:28.719870  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.720451  978470 main.go:141] libmachine: (multinode-752665) Found IP for machine: 192.168.39.20
	I0830 21:42:28.720487  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has current primary IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.720499  978470 main.go:141] libmachine: (multinode-752665) Reserving static IP address...
	I0830 21:42:28.720936  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "multinode-752665", mac: "52:54:00:73:23:77", ip: "192.168.39.20"} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:28.720979  978470 main.go:141] libmachine: (multinode-752665) Reserved static IP address: 192.168.39.20
	I0830 21:42:28.720993  978470 main.go:141] libmachine: (multinode-752665) DBG | skip adding static IP to network mk-multinode-752665 - found existing host DHCP lease matching {name: "multinode-752665", mac: "52:54:00:73:23:77", ip: "192.168.39.20"}
	I0830 21:42:28.721015  978470 main.go:141] libmachine: (multinode-752665) DBG | Getting to WaitForSSH function...
	I0830 21:42:28.721029  978470 main.go:141] libmachine: (multinode-752665) Waiting for SSH to be available...
	I0830 21:42:28.723215  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.723635  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:28.723669  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.723886  978470 main.go:141] libmachine: (multinode-752665) DBG | Using SSH client type: external
	I0830 21:42:28.723916  978470 main.go:141] libmachine: (multinode-752665) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa (-rw-------)
	I0830 21:42:28.723950  978470 main.go:141] libmachine: (multinode-752665) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.20 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 21:42:28.723969  978470 main.go:141] libmachine: (multinode-752665) DBG | About to run SSH command:
	I0830 21:42:28.723982  978470 main.go:141] libmachine: (multinode-752665) DBG | exit 0
	I0830 21:42:28.808304  978470 main.go:141] libmachine: (multinode-752665) DBG | SSH cmd err, output: <nil>: 
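	The external SSH probe described by the DBG lines above is, argument order aside, roughly the following single command (all options and paths copied from the log; shown here only as a readability aid):
	
	    /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa \
	      -p 22 docker@192.168.39.20 'exit 0'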
	I0830 21:42:28.808734  978470 main.go:141] libmachine: (multinode-752665) Calling .GetConfigRaw
	I0830 21:42:28.809451  978470 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:42:28.811888  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.812259  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:28.812290  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.812599  978470 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:42:28.812787  978470 machine.go:88] provisioning docker machine ...
	I0830 21:42:28.812803  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:42:28.813010  978470 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:42:28.813183  978470 buildroot.go:166] provisioning hostname "multinode-752665"
	I0830 21:42:28.813197  978470 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:42:28.813375  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:28.815614  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.815927  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:28.815966  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.816095  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:42:28.816278  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:28.816441  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:28.816569  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:42:28.816731  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:42:28.817205  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:42:28.817221  978470 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-752665 && echo "multinode-752665" | sudo tee /etc/hostname
	I0830 21:42:28.941258  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-752665
	
	I0830 21:42:28.941308  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:28.944048  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.944407  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:28.944461  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:28.944586  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:42:28.944774  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:28.944963  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:28.945088  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:42:28.945273  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:42:28.945664  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:42:28.945681  978470 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-752665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-752665/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-752665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:42:29.063967  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
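	The shell block above is minikube's usual /etc/hosts fix-up: if no entry already ends in the machine name, it either rewrites the existing 127.0.1.1 line or appends one, so the guest should end up with a line of the form below (inferred from the snippet; the resulting file is not shown in the log):
	
	    127.0.1.1 multinode-752665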
	I0830 21:42:29.064043  978470 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 21:42:29.064086  978470 buildroot.go:174] setting up certificates
	I0830 21:42:29.064124  978470 provision.go:83] configureAuth start
	I0830 21:42:29.064146  978470 main.go:141] libmachine: (multinode-752665) Calling .GetMachineName
	I0830 21:42:29.064477  978470 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:42:29.067179  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.067542  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.067577  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.067736  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:29.069757  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.070093  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.070117  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.070258  978470 provision.go:138] copyHostCerts
	I0830 21:42:29.070287  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:42:29.070316  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 21:42:29.070333  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:42:29.070405  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 21:42:29.070510  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:42:29.070546  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 21:42:29.070555  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:42:29.070592  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 21:42:29.070646  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:42:29.070666  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 21:42:29.070672  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:42:29.070694  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 21:42:29.070743  978470 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.multinode-752665 san=[192.168.39.20 192.168.39.20 localhost 127.0.0.1 minikube multinode-752665]
	I0830 21:42:29.231473  978470 provision.go:172] copyRemoteCerts
	I0830 21:42:29.231535  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:42:29.231567  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:29.234324  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.234608  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.234634  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.234821  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:42:29.235014  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:29.235191  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:42:29.235297  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:42:29.323407  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 21:42:29.323476  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 21:42:29.347672  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 21:42:29.347744  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 21:42:29.370737  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 21:42:29.370831  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0830 21:42:29.393075  978470 provision.go:86] duration metric: configureAuth took 328.932902ms
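	configureAuth above regenerates the server certificate for the listed SANs and pushes server-key.pem, ca.pem and server.pem into /etc/docker on the guest. A hedged way to spot-check the result by hand, reusing the ssh form already recorded in the audit table (the ls invocation is only an illustrative sketch):
	
	    minikube -p multinode-752665 ssh -n multinode-752665 sudo ls -l /etc/docker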
	I0830 21:42:29.393100  978470 buildroot.go:189] setting minikube options for container-runtime
	I0830 21:42:29.393362  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:42:29.393463  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:29.396045  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.396402  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.396440  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.396615  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:42:29.396796  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:29.396948  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:29.397041  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:42:29.397191  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:42:29.397577  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:42:29.397591  978470 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:42:29.696885  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:42:29.696914  978470 machine.go:91] provisioned docker machine in 884.113476ms
	I0830 21:42:29.696927  978470 start.go:300] post-start starting for "multinode-752665" (driver="kvm2")
	I0830 21:42:29.696941  978470 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:42:29.696972  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:42:29.697346  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:42:29.697378  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:29.700215  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.700558  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.700591  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.700742  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:42:29.700950  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:29.701107  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:42:29.701268  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:42:29.786215  978470 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:42:29.790145  978470 command_runner.go:130] > NAME=Buildroot
	I0830 21:42:29.790169  978470 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0830 21:42:29.790173  978470 command_runner.go:130] > ID=buildroot
	I0830 21:42:29.790181  978470 command_runner.go:130] > VERSION_ID=2021.02.12
	I0830 21:42:29.790189  978470 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0830 21:42:29.790465  978470 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 21:42:29.790515  978470 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 21:42:29.790586  978470 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 21:42:29.790694  978470 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 21:42:29.790710  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /etc/ssl/certs/9626212.pem
	I0830 21:42:29.790825  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 21:42:29.799748  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:42:29.822745  978470 start.go:303] post-start completed in 125.802841ms
	I0830 21:42:29.822774  978470 fix.go:56] fixHost completed within 19.011424793s
	I0830 21:42:29.822804  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:29.825633  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.825923  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.825957  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.826096  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:42:29.826344  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:29.826530  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:29.826693  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:42:29.826864  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:42:29.827256  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.20 22 <nil> <nil>}
	I0830 21:42:29.827268  978470 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 21:42:29.936512  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693431749.884127595
	
	I0830 21:42:29.936536  978470 fix.go:206] guest clock: 1693431749.884127595
	I0830 21:42:29.936545  978470 fix.go:219] Guest: 2023-08-30 21:42:29.884127595 +0000 UTC Remote: 2023-08-30 21:42:29.822779526 +0000 UTC m=+317.024391026 (delta=61.348069ms)
	I0830 21:42:29.936565  978470 fix.go:190] guest clock delta is within tolerance: 61.348069ms
	I0830 21:42:29.936570  978470 start.go:83] releasing machines lock for "multinode-752665", held for 19.125252222s
	I0830 21:42:29.936618  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:42:29.936904  978470 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:42:29.939304  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.939624  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.939658  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.939766  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:42:29.940285  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:42:29.940474  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:42:29.940566  978470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:42:29.940626  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:29.940719  978470 ssh_runner.go:195] Run: cat /version.json
	I0830 21:42:29.940751  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:42:29.943188  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.943275  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.943640  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.943670  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.943697  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:29.943715  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:29.943865  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:42:29.943984  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:42:29.944036  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:29.944121  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:42:29.944190  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:42:29.944263  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:42:29.944318  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:42:29.944374  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:42:30.049179  978470 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 21:42:30.050289  978470 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
	I0830 21:42:30.050508  978470 ssh_runner.go:195] Run: systemctl --version
	I0830 21:42:30.057006  978470 command_runner.go:130] > systemd 247 (247)
	I0830 21:42:30.057055  978470 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0830 21:42:30.057187  978470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:42:30.203692  978470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 21:42:30.210318  978470 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0830 21:42:30.210850  978470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 21:42:30.210921  978470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:42:30.226533  978470 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0830 21:42:30.226581  978470 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
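At this point minikube has renamed the pre-existing bridge/podman CNI config (/etc/cni/net.d/87-podman-bridge.conflist) to an *.mk_disabled file so that its own CNI setup takes over. If a run like this needs manual inspection, the state of that directory can be checked through minikube's ssh wrapper, e.g. (a sketch assuming the profile name from this run):

    out/minikube-linux-amd64 -p multinode-752665 ssh "ls -l /etc/cni/net.d"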
	I0830 21:42:30.226612  978470 start.go:466] detecting cgroup driver to use...
	I0830 21:42:30.226683  978470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:42:30.243694  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:42:30.255530  978470 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:42:30.255582  978470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:42:30.268715  978470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:42:30.282999  978470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:42:30.298243  978470 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0830 21:42:30.405348  978470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:42:30.522595  978470 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0830 21:42:30.522638  978470 docker.go:212] disabling docker service ...
	I0830 21:42:30.522713  978470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:42:30.535916  978470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:42:30.547704  978470 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0830 21:42:30.547808  978470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:42:30.561509  978470 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0830 21:42:30.646963  978470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:42:30.660381  978470 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0830 21:42:30.660829  978470 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0830 21:42:30.746005  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:42:30.758375  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:42:30.774859  978470 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0830 21:42:30.775232  978470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 21:42:30.775298  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:42:30.784172  978470 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:42:30.784222  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:42:30.792732  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:42:30.801220  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:42:30.810994  978470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:42:30.821432  978470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:42:30.829201  978470 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:42:30.829245  978470 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:42:30.829302  978470 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 21:42:30.840855  978470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:42:30.850359  978470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:42:30.971259  978470 ssh_runner.go:195] Run: sudo systemctl restart crio
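Condensed, the runtime reconfiguration logged above comes down to the following commands on the guest (paths and values copied from the log; shown only as a readable summary of what ssh_runner executed, not an authoritative procedure):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and the cgroup driver in the CRI-O drop-in config
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # the sysctl probe failed because br_netfilter was not loaded, hence the explicit modprobe
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio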
	I0830 21:42:31.138726  978470 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:42:31.138815  978470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:42:31.146373  978470 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0830 21:42:31.146398  978470 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 21:42:31.146438  978470 command_runner.go:130] > Device: 16h/22d	Inode: 747         Links: 1
	I0830 21:42:31.146458  978470 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:42:31.146467  978470 command_runner.go:130] > Access: 2023-08-30 21:42:31.073476286 +0000
	I0830 21:42:31.146484  978470 command_runner.go:130] > Modify: 2023-08-30 21:42:31.073476286 +0000
	I0830 21:42:31.146496  978470 command_runner.go:130] > Change: 2023-08-30 21:42:31.073476286 +0000
	I0830 21:42:31.146506  978470 command_runner.go:130] >  Birth: -
	I0830 21:42:31.146730  978470 start.go:534] Will wait 60s for crictl version
	I0830 21:42:31.146780  978470 ssh_runner.go:195] Run: which crictl
	I0830 21:42:31.150871  978470 command_runner.go:130] > /usr/bin/crictl
	I0830 21:42:31.150957  978470 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:42:31.190801  978470 command_runner.go:130] > Version:  0.1.0
	I0830 21:42:31.190820  978470 command_runner.go:130] > RuntimeName:  cri-o
	I0830 21:42:31.190825  978470 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0830 21:42:31.190830  978470 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0830 21:42:31.190933  978470 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 21:42:31.191034  978470 ssh_runner.go:195] Run: crio --version
	I0830 21:42:31.234051  978470 command_runner.go:130] > crio version 1.24.1
	I0830 21:42:31.234079  978470 command_runner.go:130] > Version:          1.24.1
	I0830 21:42:31.234091  978470 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:42:31.234099  978470 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:42:31.234107  978470 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:42:31.234114  978470 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:42:31.234120  978470 command_runner.go:130] > Compiler:         gc
	I0830 21:42:31.234130  978470 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:42:31.234154  978470 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:42:31.234180  978470 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:42:31.234190  978470 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:42:31.234199  978470 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:42:31.235484  978470 ssh_runner.go:195] Run: crio --version
	I0830 21:42:31.282269  978470 command_runner.go:130] > crio version 1.24.1
	I0830 21:42:31.282296  978470 command_runner.go:130] > Version:          1.24.1
	I0830 21:42:31.282331  978470 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:42:31.282338  978470 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:42:31.282346  978470 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:42:31.282354  978470 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:42:31.282363  978470 command_runner.go:130] > Compiler:         gc
	I0830 21:42:31.282375  978470 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:42:31.282384  978470 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:42:31.282401  978470 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:42:31.282412  978470 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:42:31.282419  978470 command_runner.go:130] > AppArmorEnabled:  false
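Both version probes above can be repeated by hand against the same node when debugging, e.g. (a sketch, again assuming this run's profile name):

    out/minikube-linux-amd64 -p multinode-752665 ssh "sudo /usr/bin/crictl version && crio --version"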
	I0830 21:42:31.286168  978470 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 21:42:31.287679  978470 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:42:31.290121  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:31.290421  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:42:31.290452  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:42:31.290639  978470 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 21:42:31.294623  978470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:42:31.308701  978470 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:42:31.308763  978470 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:42:31.339157  978470 command_runner.go:130] > {
	I0830 21:42:31.339180  978470 command_runner.go:130] >   "images": [
	I0830 21:42:31.339186  978470 command_runner.go:130] >     {
	I0830 21:42:31.339197  978470 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0830 21:42:31.339204  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:31.339211  978470 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0830 21:42:31.339217  978470 command_runner.go:130] >       ],
	I0830 21:42:31.339223  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:31.339242  978470 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0830 21:42:31.339259  978470 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0830 21:42:31.339266  978470 command_runner.go:130] >       ],
	I0830 21:42:31.339287  978470 command_runner.go:130] >       "size": "750414",
	I0830 21:42:31.339301  978470 command_runner.go:130] >       "uid": {
	I0830 21:42:31.339311  978470 command_runner.go:130] >         "value": "65535"
	I0830 21:42:31.339317  978470 command_runner.go:130] >       },
	I0830 21:42:31.339325  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:31.339338  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:31.339343  978470 command_runner.go:130] >     }
	I0830 21:42:31.339349  978470 command_runner.go:130] >   ]
	I0830 21:42:31.339354  978470 command_runner.go:130] > }
	I0830 21:42:31.339554  978470 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 21:42:31.339621  978470 ssh_runner.go:195] Run: which lz4
	I0830 21:42:31.344075  978470 command_runner.go:130] > /usr/bin/lz4
	I0830 21:42:31.344100  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0830 21:42:31.344185  978470 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 21:42:31.348488  978470 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 21:42:31.348535  978470 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 21:42:31.348559  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 21:42:33.165575  978470 crio.go:444] Took 1.821408 seconds to copy over tarball
	I0830 21:42:33.165691  978470 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 21:42:36.059596  978470 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.89387283s)
	I0830 21:42:36.059632  978470 crio.go:451] Took 2.894002 seconds to extract the tarball
	I0830 21:42:36.059651  978470 ssh_runner.go:146] rm: /preloaded.tar.lz4
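The preload step above copies the ~457 MB cached image tarball to the guest and unpacks it into /var, which populates CRI-O's storage under /var/lib/containers/storage. A manual equivalent, using the key and tarball paths from the log but staging the file in /tmp since the ssh user cannot write to / directly (a sketch, not what ssh_runner does internally):

    scp -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa \
        /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.20:/tmp/preloaded.tar.lz4
    ssh -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa \
        docker@192.168.39.20 "sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4"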
	I0830 21:42:36.101006  978470 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:42:36.145901  978470 command_runner.go:130] > {
	I0830 21:42:36.145928  978470 command_runner.go:130] >   "images": [
	I0830 21:42:36.145934  978470 command_runner.go:130] >     {
	I0830 21:42:36.145948  978470 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0830 21:42:36.145968  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.145979  978470 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0830 21:42:36.145985  978470 command_runner.go:130] >       ],
	I0830 21:42:36.145993  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.146006  978470 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0830 21:42:36.146027  978470 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0830 21:42:36.146034  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146045  978470 command_runner.go:130] >       "size": "65249302",
	I0830 21:42:36.146055  978470 command_runner.go:130] >       "uid": null,
	I0830 21:42:36.146065  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.146091  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.146104  978470 command_runner.go:130] >     },
	I0830 21:42:36.146115  978470 command_runner.go:130] >     {
	I0830 21:42:36.146129  978470 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0830 21:42:36.146139  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.146150  978470 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0830 21:42:36.146237  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146277  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.146293  978470 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0830 21:42:36.146310  978470 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0830 21:42:36.146319  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146327  978470 command_runner.go:130] >       "size": "31470524",
	I0830 21:42:36.146337  978470 command_runner.go:130] >       "uid": null,
	I0830 21:42:36.146359  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.146369  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.146374  978470 command_runner.go:130] >     },
	I0830 21:42:36.146378  978470 command_runner.go:130] >     {
	I0830 21:42:36.146388  978470 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0830 21:42:36.146398  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.146408  978470 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0830 21:42:36.146421  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146431  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.146448  978470 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0830 21:42:36.146461  978470 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0830 21:42:36.146468  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146475  978470 command_runner.go:130] >       "size": "53621675",
	I0830 21:42:36.146485  978470 command_runner.go:130] >       "uid": null,
	I0830 21:42:36.146496  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.146502  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.146512  978470 command_runner.go:130] >     },
	I0830 21:42:36.146518  978470 command_runner.go:130] >     {
	I0830 21:42:36.146531  978470 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0830 21:42:36.146541  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.146549  978470 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0830 21:42:36.146555  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146561  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.146578  978470 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0830 21:42:36.146593  978470 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0830 21:42:36.146605  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146615  978470 command_runner.go:130] >       "size": "295456551",
	I0830 21:42:36.146624  978470 command_runner.go:130] >       "uid": {
	I0830 21:42:36.146633  978470 command_runner.go:130] >         "value": "0"
	I0830 21:42:36.146671  978470 command_runner.go:130] >       },
	I0830 21:42:36.146685  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.146692  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.146701  978470 command_runner.go:130] >     },
	I0830 21:42:36.146706  978470 command_runner.go:130] >     {
	I0830 21:42:36.146718  978470 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0830 21:42:36.146726  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.146735  978470 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0830 21:42:36.146747  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146752  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.146765  978470 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0830 21:42:36.146778  978470 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0830 21:42:36.146784  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146791  978470 command_runner.go:130] >       "size": "126972880",
	I0830 21:42:36.146803  978470 command_runner.go:130] >       "uid": {
	I0830 21:42:36.146810  978470 command_runner.go:130] >         "value": "0"
	I0830 21:42:36.146819  978470 command_runner.go:130] >       },
	I0830 21:42:36.146824  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.146831  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.146834  978470 command_runner.go:130] >     },
	I0830 21:42:36.146840  978470 command_runner.go:130] >     {
	I0830 21:42:36.146849  978470 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0830 21:42:36.146860  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.146872  978470 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0830 21:42:36.146880  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146887  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.146901  978470 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0830 21:42:36.146915  978470 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0830 21:42:36.146924  978470 command_runner.go:130] >       ],
	I0830 21:42:36.146930  978470 command_runner.go:130] >       "size": "123163446",
	I0830 21:42:36.146935  978470 command_runner.go:130] >       "uid": {
	I0830 21:42:36.146942  978470 command_runner.go:130] >         "value": "0"
	I0830 21:42:36.146955  978470 command_runner.go:130] >       },
	I0830 21:42:36.146965  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.146975  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.146988  978470 command_runner.go:130] >     },
	I0830 21:42:36.146997  978470 command_runner.go:130] >     {
	I0830 21:42:36.147007  978470 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0830 21:42:36.147015  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.147020  978470 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0830 21:42:36.147024  978470 command_runner.go:130] >       ],
	I0830 21:42:36.147028  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.147038  978470 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0830 21:42:36.147045  978470 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0830 21:42:36.147051  978470 command_runner.go:130] >       ],
	I0830 21:42:36.147055  978470 command_runner.go:130] >       "size": "74680215",
	I0830 21:42:36.147061  978470 command_runner.go:130] >       "uid": null,
	I0830 21:42:36.147065  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.147070  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.147074  978470 command_runner.go:130] >     },
	I0830 21:42:36.147081  978470 command_runner.go:130] >     {
	I0830 21:42:36.147089  978470 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0830 21:42:36.147097  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.147102  978470 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0830 21:42:36.147106  978470 command_runner.go:130] >       ],
	I0830 21:42:36.147110  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.147119  978470 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0830 21:42:36.147147  978470 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0830 21:42:36.147154  978470 command_runner.go:130] >       ],
	I0830 21:42:36.147158  978470 command_runner.go:130] >       "size": "61477686",
	I0830 21:42:36.147162  978470 command_runner.go:130] >       "uid": {
	I0830 21:42:36.147166  978470 command_runner.go:130] >         "value": "0"
	I0830 21:42:36.147170  978470 command_runner.go:130] >       },
	I0830 21:42:36.147176  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.147182  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.147186  978470 command_runner.go:130] >     },
	I0830 21:42:36.147191  978470 command_runner.go:130] >     {
	I0830 21:42:36.147197  978470 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0830 21:42:36.147205  978470 command_runner.go:130] >       "repoTags": [
	I0830 21:42:36.147210  978470 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0830 21:42:36.147214  978470 command_runner.go:130] >       ],
	I0830 21:42:36.147218  978470 command_runner.go:130] >       "repoDigests": [
	I0830 21:42:36.147225  978470 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0830 21:42:36.147235  978470 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0830 21:42:36.147238  978470 command_runner.go:130] >       ],
	I0830 21:42:36.147242  978470 command_runner.go:130] >       "size": "750414",
	I0830 21:42:36.147246  978470 command_runner.go:130] >       "uid": {
	I0830 21:42:36.147250  978470 command_runner.go:130] >         "value": "65535"
	I0830 21:42:36.147256  978470 command_runner.go:130] >       },
	I0830 21:42:36.147260  978470 command_runner.go:130] >       "username": "",
	I0830 21:42:36.147265  978470 command_runner.go:130] >       "spec": null
	I0830 21:42:36.147268  978470 command_runner.go:130] >     }
	I0830 21:42:36.147273  978470 command_runner.go:130] >   ]
	I0830 21:42:36.147276  978470 command_runner.go:130] > }
	I0830 21:42:36.147404  978470 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 21:42:36.147417  978470 cache_images.go:84] Images are preloaded, skipping loading
	I0830 21:42:36.147481  978470 ssh_runner.go:195] Run: crio config
	I0830 21:42:36.198166  978470 command_runner.go:130] ! time="2023-08-30 21:42:36.145221488Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0830 21:42:36.198197  978470 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0830 21:42:36.205573  978470 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0830 21:42:36.205598  978470 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0830 21:42:36.205608  978470 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0830 21:42:36.205613  978470 command_runner.go:130] > #
	I0830 21:42:36.205623  978470 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0830 21:42:36.205633  978470 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0830 21:42:36.205643  978470 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0830 21:42:36.205654  978470 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0830 21:42:36.205661  978470 command_runner.go:130] > # reload'.
	I0830 21:42:36.205678  978470 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0830 21:42:36.205692  978470 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0830 21:42:36.205704  978470 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0830 21:42:36.205714  978470 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0830 21:42:36.205721  978470 command_runner.go:130] > [crio]
	I0830 21:42:36.205733  978470 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0830 21:42:36.205745  978470 command_runner.go:130] > # containers images, in this directory.
	I0830 21:42:36.205753  978470 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0830 21:42:36.205774  978470 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0830 21:42:36.205789  978470 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0830 21:42:36.205800  978470 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0830 21:42:36.205812  978470 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0830 21:42:36.205823  978470 command_runner.go:130] > storage_driver = "overlay"
	I0830 21:42:36.205833  978470 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0830 21:42:36.205846  978470 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0830 21:42:36.205854  978470 command_runner.go:130] > storage_option = [
	I0830 21:42:36.205863  978470 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0830 21:42:36.205869  978470 command_runner.go:130] > ]
	I0830 21:42:36.205881  978470 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0830 21:42:36.205893  978470 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0830 21:42:36.205901  978470 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0830 21:42:36.205914  978470 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0830 21:42:36.205925  978470 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0830 21:42:36.205935  978470 command_runner.go:130] > # always happen on a node reboot
	I0830 21:42:36.205944  978470 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0830 21:42:36.205957  978470 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0830 21:42:36.205977  978470 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0830 21:42:36.205997  978470 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0830 21:42:36.206010  978470 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0830 21:42:36.206027  978470 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0830 21:42:36.206045  978470 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0830 21:42:36.206054  978470 command_runner.go:130] > # internal_wipe = true
	I0830 21:42:36.206064  978470 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0830 21:42:36.206078  978470 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0830 21:42:36.206090  978470 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0830 21:42:36.206104  978470 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0830 21:42:36.206117  978470 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0830 21:42:36.206126  978470 command_runner.go:130] > [crio.api]
	I0830 21:42:36.206137  978470 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0830 21:42:36.206148  978470 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0830 21:42:36.206157  978470 command_runner.go:130] > # IP address on which the stream server will listen.
	I0830 21:42:36.206165  978470 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0830 21:42:36.206179  978470 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0830 21:42:36.206189  978470 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0830 21:42:36.206202  978470 command_runner.go:130] > # stream_port = "0"
	I0830 21:42:36.206215  978470 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0830 21:42:36.206225  978470 command_runner.go:130] > # stream_enable_tls = false
	I0830 21:42:36.206237  978470 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0830 21:42:36.206246  978470 command_runner.go:130] > # stream_idle_timeout = ""
	I0830 21:42:36.206258  978470 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0830 21:42:36.206272  978470 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0830 21:42:36.206282  978470 command_runner.go:130] > # minutes.
	I0830 21:42:36.206291  978470 command_runner.go:130] > # stream_tls_cert = ""
	I0830 21:42:36.206304  978470 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0830 21:42:36.206318  978470 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0830 21:42:36.206328  978470 command_runner.go:130] > # stream_tls_key = ""
	I0830 21:42:36.206342  978470 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0830 21:42:36.206355  978470 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0830 21:42:36.206368  978470 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0830 21:42:36.206378  978470 command_runner.go:130] > # stream_tls_ca = ""
	I0830 21:42:36.206399  978470 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:42:36.206409  978470 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0830 21:42:36.206427  978470 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:42:36.206438  978470 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0830 21:42:36.206473  978470 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0830 21:42:36.206487  978470 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0830 21:42:36.206494  978470 command_runner.go:130] > [crio.runtime]
	I0830 21:42:36.206505  978470 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0830 21:42:36.206517  978470 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0830 21:42:36.206528  978470 command_runner.go:130] > # "nofile=1024:2048"
	I0830 21:42:36.206542  978470 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0830 21:42:36.206552  978470 command_runner.go:130] > # default_ulimits = [
	I0830 21:42:36.206561  978470 command_runner.go:130] > # ]
	I0830 21:42:36.206572  978470 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0830 21:42:36.206581  978470 command_runner.go:130] > # no_pivot = false
	I0830 21:42:36.206592  978470 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0830 21:42:36.206606  978470 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0830 21:42:36.206618  978470 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0830 21:42:36.206631  978470 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0830 21:42:36.206643  978470 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0830 21:42:36.206661  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:42:36.206683  978470 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0830 21:42:36.206694  978470 command_runner.go:130] > # Cgroup setting for conmon
	I0830 21:42:36.206709  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0830 21:42:36.206720  978470 command_runner.go:130] > conmon_cgroup = "pod"
	I0830 21:42:36.206732  978470 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0830 21:42:36.206744  978470 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0830 21:42:36.206759  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:42:36.206769  978470 command_runner.go:130] > conmon_env = [
	I0830 21:42:36.206783  978470 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0830 21:42:36.206791  978470 command_runner.go:130] > ]
	I0830 21:42:36.206802  978470 command_runner.go:130] > # Additional environment variables to set for all the
	I0830 21:42:36.206814  978470 command_runner.go:130] > # containers. These are overridden if set in the
	I0830 21:42:36.206827  978470 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0830 21:42:36.206837  978470 command_runner.go:130] > # default_env = [
	I0830 21:42:36.206845  978470 command_runner.go:130] > # ]
	I0830 21:42:36.206856  978470 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0830 21:42:36.206866  978470 command_runner.go:130] > # selinux = false
	I0830 21:42:36.206885  978470 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0830 21:42:36.206899  978470 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0830 21:42:36.206913  978470 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0830 21:42:36.206923  978470 command_runner.go:130] > # seccomp_profile = ""
	I0830 21:42:36.206936  978470 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0830 21:42:36.206949  978470 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0830 21:42:36.206963  978470 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0830 21:42:36.206973  978470 command_runner.go:130] > # which might increase security.
	I0830 21:42:36.206984  978470 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0830 21:42:36.206999  978470 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0830 21:42:36.207013  978470 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0830 21:42:36.207028  978470 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0830 21:42:36.207042  978470 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0830 21:42:36.207056  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:42:36.207068  978470 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0830 21:42:36.207081  978470 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0830 21:42:36.207092  978470 command_runner.go:130] > # the cgroup blockio controller.
	I0830 21:42:36.207100  978470 command_runner.go:130] > # blockio_config_file = ""
	I0830 21:42:36.207118  978470 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0830 21:42:36.207128  978470 command_runner.go:130] > # irqbalance daemon.
	I0830 21:42:36.207139  978470 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0830 21:42:36.207154  978470 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0830 21:42:36.207166  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:42:36.207176  978470 command_runner.go:130] > # rdt_config_file = ""
	I0830 21:42:36.207185  978470 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0830 21:42:36.207196  978470 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0830 21:42:36.207209  978470 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0830 21:42:36.207220  978470 command_runner.go:130] > # separate_pull_cgroup = ""
	I0830 21:42:36.207234  978470 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0830 21:42:36.207248  978470 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0830 21:42:36.207259  978470 command_runner.go:130] > # will be added.
	I0830 21:42:36.207270  978470 command_runner.go:130] > # default_capabilities = [
	I0830 21:42:36.207277  978470 command_runner.go:130] > # 	"CHOWN",
	I0830 21:42:36.207285  978470 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0830 21:42:36.207292  978470 command_runner.go:130] > # 	"FSETID",
	I0830 21:42:36.207299  978470 command_runner.go:130] > # 	"FOWNER",
	I0830 21:42:36.207312  978470 command_runner.go:130] > # 	"SETGID",
	I0830 21:42:36.207322  978470 command_runner.go:130] > # 	"SETUID",
	I0830 21:42:36.207328  978470 command_runner.go:130] > # 	"SETPCAP",
	I0830 21:42:36.207338  978470 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0830 21:42:36.207354  978470 command_runner.go:130] > # 	"KILL",
	I0830 21:42:36.207362  978470 command_runner.go:130] > # ]
	I0830 21:42:36.207373  978470 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0830 21:42:36.207387  978470 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:42:36.207397  978470 command_runner.go:130] > # default_sysctls = [
	I0830 21:42:36.207405  978470 command_runner.go:130] > # ]
	I0830 21:42:36.207415  978470 command_runner.go:130] > # List of devices on the host that a
	I0830 21:42:36.207428  978470 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0830 21:42:36.207438  978470 command_runner.go:130] > # allowed_devices = [
	I0830 21:42:36.207447  978470 command_runner.go:130] > # 	"/dev/fuse",
	I0830 21:42:36.207453  978470 command_runner.go:130] > # ]
	I0830 21:42:36.207464  978470 command_runner.go:130] > # List of additional devices. specified as
	I0830 21:42:36.207479  978470 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0830 21:42:36.207491  978470 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0830 21:42:36.207538  978470 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:42:36.207549  978470 command_runner.go:130] > # additional_devices = [
	I0830 21:42:36.207555  978470 command_runner.go:130] > # ]
	I0830 21:42:36.207567  978470 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0830 21:42:36.207577  978470 command_runner.go:130] > # cdi_spec_dirs = [
	I0830 21:42:36.207585  978470 command_runner.go:130] > # 	"/etc/cdi",
	I0830 21:42:36.207595  978470 command_runner.go:130] > # 	"/var/run/cdi",
	I0830 21:42:36.207601  978470 command_runner.go:130] > # ]
	I0830 21:42:36.207615  978470 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0830 21:42:36.207628  978470 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0830 21:42:36.207638  978470 command_runner.go:130] > # Defaults to false.
	I0830 21:42:36.207647  978470 command_runner.go:130] > # device_ownership_from_security_context = false
	I0830 21:42:36.207661  978470 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0830 21:42:36.207679  978470 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0830 21:42:36.207689  978470 command_runner.go:130] > # hooks_dir = [
	I0830 21:42:36.207699  978470 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0830 21:42:36.207707  978470 command_runner.go:130] > # ]
	I0830 21:42:36.207718  978470 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0830 21:42:36.207736  978470 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0830 21:42:36.207749  978470 command_runner.go:130] > # its default mounts from the following two files:
	I0830 21:42:36.207758  978470 command_runner.go:130] > #
	I0830 21:42:36.207779  978470 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0830 21:42:36.207794  978470 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0830 21:42:36.207806  978470 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0830 21:42:36.207814  978470 command_runner.go:130] > #
	I0830 21:42:36.207826  978470 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0830 21:42:36.207840  978470 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0830 21:42:36.207854  978470 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0830 21:42:36.207866  978470 command_runner.go:130] > #      only add mounts it finds in this file.
	I0830 21:42:36.207874  978470 command_runner.go:130] > #
	I0830 21:42:36.207882  978470 command_runner.go:130] > # default_mounts_file = ""
	I0830 21:42:36.207895  978470 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0830 21:42:36.207909  978470 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0830 21:42:36.207919  978470 command_runner.go:130] > pids_limit = 1024
	I0830 21:42:36.207934  978470 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0830 21:42:36.207947  978470 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0830 21:42:36.207966  978470 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0830 21:42:36.207983  978470 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0830 21:42:36.207993  978470 command_runner.go:130] > # log_size_max = -1
	I0830 21:42:36.208008  978470 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0830 21:42:36.208019  978470 command_runner.go:130] > # log_to_journald = false
	I0830 21:42:36.208033  978470 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0830 21:42:36.208045  978470 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0830 21:42:36.208057  978470 command_runner.go:130] > # Path to directory for container attach sockets.
	I0830 21:42:36.208069  978470 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0830 21:42:36.208081  978470 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0830 21:42:36.208092  978470 command_runner.go:130] > # bind_mount_prefix = ""
	I0830 21:42:36.208105  978470 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0830 21:42:36.208115  978470 command_runner.go:130] > # read_only = false
	I0830 21:42:36.208129  978470 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0830 21:42:36.208142  978470 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0830 21:42:36.208151  978470 command_runner.go:130] > # live configuration reload.
	I0830 21:42:36.208161  978470 command_runner.go:130] > # log_level = "info"
	I0830 21:42:36.208173  978470 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0830 21:42:36.208187  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:42:36.208220  978470 command_runner.go:130] > # log_filter = ""
	I0830 21:42:36.208239  978470 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0830 21:42:36.208254  978470 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0830 21:42:36.208264  978470 command_runner.go:130] > # separated by comma.
	I0830 21:42:36.208274  978470 command_runner.go:130] > # uid_mappings = ""
	I0830 21:42:36.208286  978470 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0830 21:42:36.208300  978470 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0830 21:42:36.208310  978470 command_runner.go:130] > # separated by comma.
	I0830 21:42:36.208318  978470 command_runner.go:130] > # gid_mappings = ""
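The two mapping options above share the containerID:hostID:size format. A minimal sketch of setting them through a CRI-O drop-in file (the 0:100000:65536 range and the drop-in filename are illustrative assumptions, not values from this run):

	# hypothetical drop-in; CRI-O merges files under /etc/crio/crio.conf.d/ over crio.conf
	sudo tee /etc/crio/crio.conf.d/10-userns.conf >/dev/null <<'EOF'
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF
	sudo systemctl restart crio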
	I0830 21:42:36.208334  978470 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0830 21:42:36.208348  978470 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:42:36.208362  978470 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:42:36.208372  978470 command_runner.go:130] > # minimum_mappable_uid = -1
	I0830 21:42:36.208386  978470 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0830 21:42:36.208400  978470 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:42:36.208412  978470 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:42:36.208423  978470 command_runner.go:130] > # minimum_mappable_gid = -1
	I0830 21:42:36.208437  978470 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0830 21:42:36.208450  978470 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0830 21:42:36.208463  978470 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0830 21:42:36.208473  978470 command_runner.go:130] > # ctr_stop_timeout = 30
	I0830 21:42:36.208486  978470 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0830 21:42:36.208497  978470 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0830 21:42:36.208509  978470 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0830 21:42:36.208520  978470 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0830 21:42:36.208535  978470 command_runner.go:130] > drop_infra_ctr = false
	I0830 21:42:36.208549  978470 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0830 21:42:36.208562  978470 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0830 21:42:36.208577  978470 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0830 21:42:36.208585  978470 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0830 21:42:36.208598  978470 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0830 21:42:36.208610  978470 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0830 21:42:36.208621  978470 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0830 21:42:36.208634  978470 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0830 21:42:36.208644  978470 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0830 21:42:36.208662  978470 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0830 21:42:36.208681  978470 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0830 21:42:36.208696  978470 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0830 21:42:36.208706  978470 command_runner.go:130] > # default_runtime = "runc"
	I0830 21:42:36.208718  978470 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0830 21:42:36.208734  978470 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0830 21:42:36.208756  978470 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0830 21:42:36.208768  978470 command_runner.go:130] > # creation as a file is not desired either.
	I0830 21:42:36.208786  978470 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0830 21:42:36.208798  978470 command_runner.go:130] > # the hostname is being managed dynamically.
	I0830 21:42:36.208809  978470 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0830 21:42:36.208818  978470 command_runner.go:130] > # ]
	I0830 21:42:36.208829  978470 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0830 21:42:36.208843  978470 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0830 21:42:36.208858  978470 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0830 21:42:36.208872  978470 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0830 21:42:36.208880  978470 command_runner.go:130] > #
	I0830 21:42:36.208889  978470 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0830 21:42:36.208904  978470 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0830 21:42:36.208914  978470 command_runner.go:130] > #  runtime_type = "oci"
	I0830 21:42:36.208924  978470 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0830 21:42:36.208933  978470 command_runner.go:130] > #  privileged_without_host_devices = false
	I0830 21:42:36.208944  978470 command_runner.go:130] > #  allowed_annotations = []
	I0830 21:42:36.208953  978470 command_runner.go:130] > # Where:
	I0830 21:42:36.208963  978470 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0830 21:42:36.208976  978470 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0830 21:42:36.208990  978470 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0830 21:42:36.209005  978470 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0830 21:42:36.209014  978470 command_runner.go:130] > #   in $PATH.
	I0830 21:42:36.209025  978470 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0830 21:42:36.209036  978470 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0830 21:42:36.209051  978470 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0830 21:42:36.209060  978470 command_runner.go:130] > #   state.
	I0830 21:42:36.209072  978470 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0830 21:42:36.209085  978470 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0830 21:42:36.209099  978470 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0830 21:42:36.209116  978470 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0830 21:42:36.209130  978470 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0830 21:42:36.209145  978470 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0830 21:42:36.209156  978470 command_runner.go:130] > #   The currently recognized values are:
	I0830 21:42:36.209170  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0830 21:42:36.209187  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0830 21:42:36.209202  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0830 21:42:36.209216  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0830 21:42:36.209230  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0830 21:42:36.209244  978470 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0830 21:42:36.209258  978470 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0830 21:42:36.209273  978470 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0830 21:42:36.209286  978470 command_runner.go:130] > #   should be moved to the container's cgroup
	I0830 21:42:36.209297  978470 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0830 21:42:36.209307  978470 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0830 21:42:36.209317  978470 command_runner.go:130] > runtime_type = "oci"
	I0830 21:42:36.209327  978470 command_runner.go:130] > runtime_root = "/run/runc"
	I0830 21:42:36.209336  978470 command_runner.go:130] > runtime_config_path = ""
	I0830 21:42:36.209350  978470 command_runner.go:130] > monitor_path = ""
	I0830 21:42:36.209360  978470 command_runner.go:130] > monitor_cgroup = ""
	I0830 21:42:36.209369  978470 command_runner.go:130] > monitor_exec_cgroup = ""
	I0830 21:42:36.209381  978470 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0830 21:42:36.209390  978470 command_runner.go:130] > # running containers
	I0830 21:42:36.209400  978470 command_runner.go:130] > #[crio.runtime.runtimes.crun]
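The crun handler above ships commented out. A hedged sketch of what enabling it through a drop-in could look like (the /usr/bin/crun path and runtime_root are assumptions, not taken from this image):

	# hypothetical drop-in registering crun as an additional OCI runtime handler
	sudo tee /etc/crio/crio.conf.d/20-crun.conf >/dev/null <<'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio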
	I0830 21:42:36.209415  978470 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0830 21:42:36.209480  978470 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0830 21:42:36.209494  978470 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0830 21:42:36.209502  978470 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0830 21:42:36.209511  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0830 21:42:36.209522  978470 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0830 21:42:36.209531  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0830 21:42:36.209543  978470 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0830 21:42:36.209554  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0830 21:42:36.209566  978470 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0830 21:42:36.209579  978470 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0830 21:42:36.209592  978470 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0830 21:42:36.209612  978470 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0830 21:42:36.209628  978470 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0830 21:42:36.209641  978470 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0830 21:42:36.209660  978470 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0830 21:42:36.209681  978470 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0830 21:42:36.209698  978470 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0830 21:42:36.209714  978470 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0830 21:42:36.209723  978470 command_runner.go:130] > # Example:
	I0830 21:42:36.209732  978470 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0830 21:42:36.209744  978470 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0830 21:42:36.209755  978470 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0830 21:42:36.209764  978470 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0830 21:42:36.209773  978470 command_runner.go:130] > # cpuset = "0-1"
	I0830 21:42:36.209781  978470 command_runner.go:130] > # cpushares = 0
	I0830 21:42:36.209789  978470 command_runner.go:130] > # Where:
	I0830 21:42:36.209798  978470 command_runner.go:130] > # The workload name is workload-type.
	I0830 21:42:36.209813  978470 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0830 21:42:36.209826  978470 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0830 21:42:36.209843  978470 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0830 21:42:36.209860  978470 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0830 21:42:36.209873  978470 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0830 21:42:36.209886  978470 command_runner.go:130] > # 
	I0830 21:42:36.209901  978470 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0830 21:42:36.209909  978470 command_runner.go:130] > #
	I0830 21:42:36.209920  978470 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0830 21:42:36.209934  978470 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0830 21:42:36.209948  978470 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0830 21:42:36.209962  978470 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0830 21:42:36.209976  978470 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0830 21:42:36.209985  978470 command_runner.go:130] > [crio.image]
	I0830 21:42:36.209996  978470 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0830 21:42:36.210007  978470 command_runner.go:130] > # default_transport = "docker://"
	I0830 21:42:36.210019  978470 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0830 21:42:36.210033  978470 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:42:36.210043  978470 command_runner.go:130] > # global_auth_file = ""
	I0830 21:42:36.210055  978470 command_runner.go:130] > # The image used to instantiate infra containers.
	I0830 21:42:36.210074  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:42:36.210085  978470 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0830 21:42:36.210096  978470 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0830 21:42:36.210110  978470 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:42:36.210122  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:42:36.210133  978470 command_runner.go:130] > # pause_image_auth_file = ""
	I0830 21:42:36.210144  978470 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0830 21:42:36.210161  978470 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0830 21:42:36.210174  978470 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0830 21:42:36.210185  978470 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0830 21:42:36.210196  978470 command_runner.go:130] > # pause_command = "/pause"
	I0830 21:42:36.210210  978470 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0830 21:42:36.210224  978470 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0830 21:42:36.210239  978470 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0830 21:42:36.210253  978470 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0830 21:42:36.210265  978470 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0830 21:42:36.210272  978470 command_runner.go:130] > # signature_policy = ""
	I0830 21:42:36.210286  978470 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0830 21:42:36.210303  978470 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0830 21:42:36.210313  978470 command_runner.go:130] > # changing them here.
	I0830 21:42:36.210324  978470 command_runner.go:130] > # insecure_registries = [
	I0830 21:42:36.210330  978470 command_runner.go:130] > # ]
	I0830 21:42:36.210342  978470 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0830 21:42:36.210349  978470 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0830 21:42:36.210356  978470 command_runner.go:130] > # image_volumes = "mkdir"
	I0830 21:42:36.210366  978470 command_runner.go:130] > # Temporary directory to use for storing big files
	I0830 21:42:36.210375  978470 command_runner.go:130] > # big_files_temporary_dir = ""
	I0830 21:42:36.210386  978470 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0830 21:42:36.210394  978470 command_runner.go:130] > # CNI plugins.
	I0830 21:42:36.210402  978470 command_runner.go:130] > [crio.network]
	I0830 21:42:36.210411  978470 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0830 21:42:36.210420  978470 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0830 21:42:36.210429  978470 command_runner.go:130] > # cni_default_network = ""
	I0830 21:42:36.210442  978470 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0830 21:42:36.210454  978470 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0830 21:42:36.210467  978470 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0830 21:42:36.210483  978470 command_runner.go:130] > # plugin_dirs = [
	I0830 21:42:36.210493  978470 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0830 21:42:36.210501  978470 command_runner.go:130] > # ]
	I0830 21:42:36.210511  978470 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0830 21:42:36.210521  978470 command_runner.go:130] > [crio.metrics]
	I0830 21:42:36.210530  978470 command_runner.go:130] > # Globally enable or disable metrics support.
	I0830 21:42:36.210540  978470 command_runner.go:130] > enable_metrics = true
	I0830 21:42:36.210552  978470 command_runner.go:130] > # Specify enabled metrics collectors.
	I0830 21:42:36.210563  978470 command_runner.go:130] > # Per default all metrics are enabled.
	I0830 21:42:36.210577  978470 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0830 21:42:36.210594  978470 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0830 21:42:36.210608  978470 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0830 21:42:36.210618  978470 command_runner.go:130] > # metrics_collectors = [
	I0830 21:42:36.210626  978470 command_runner.go:130] > # 	"operations",
	I0830 21:42:36.210638  978470 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0830 21:42:36.210646  978470 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0830 21:42:36.210657  978470 command_runner.go:130] > # 	"operations_errors",
	I0830 21:42:36.210666  978470 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0830 21:42:36.210682  978470 command_runner.go:130] > # 	"image_pulls_by_name",
	I0830 21:42:36.210694  978470 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0830 21:42:36.210705  978470 command_runner.go:130] > # 	"image_pulls_failures",
	I0830 21:42:36.210715  978470 command_runner.go:130] > # 	"image_pulls_successes",
	I0830 21:42:36.210726  978470 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0830 21:42:36.210735  978470 command_runner.go:130] > # 	"image_layer_reuse",
	I0830 21:42:36.210745  978470 command_runner.go:130] > # 	"containers_oom_total",
	I0830 21:42:36.210755  978470 command_runner.go:130] > # 	"containers_oom",
	I0830 21:42:36.210762  978470 command_runner.go:130] > # 	"processes_defunct",
	I0830 21:42:36.210770  978470 command_runner.go:130] > # 	"operations_total",
	I0830 21:42:36.210781  978470 command_runner.go:130] > # 	"operations_latency_seconds",
	I0830 21:42:36.210792  978470 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0830 21:42:36.210803  978470 command_runner.go:130] > # 	"operations_errors_total",
	I0830 21:42:36.210813  978470 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0830 21:42:36.210825  978470 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0830 21:42:36.210835  978470 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0830 21:42:36.210845  978470 command_runner.go:130] > # 	"image_pulls_success_total",
	I0830 21:42:36.210853  978470 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0830 21:42:36.210868  978470 command_runner.go:130] > # 	"containers_oom_count_total",
	I0830 21:42:36.210876  978470 command_runner.go:130] > # ]
	I0830 21:42:36.210886  978470 command_runner.go:130] > # The port on which the metrics server will listen.
	I0830 21:42:36.210897  978470 command_runner.go:130] > # metrics_port = 9090
	I0830 21:42:36.210909  978470 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0830 21:42:36.210919  978470 command_runner.go:130] > # metrics_socket = ""
	I0830 21:42:36.210928  978470 command_runner.go:130] > # The certificate for the secure metrics server.
	I0830 21:42:36.210941  978470 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0830 21:42:36.210952  978470 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0830 21:42:36.210964  978470 command_runner.go:130] > # certificate on any modification event.
	I0830 21:42:36.210971  978470 command_runner.go:130] > # metrics_cert = ""
	I0830 21:42:36.210983  978470 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0830 21:42:36.210995  978470 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0830 21:42:36.211005  978470 command_runner.go:130] > # metrics_key = ""
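With enable_metrics = true above and metrics_port left at its commented default of 9090, the Prometheus endpoint can be probed from inside the VM. A small sketch (run on the node; purely illustrative, not part of the test):

	# quick check that the CRI-O metrics endpoint is serving
	curl -s http://127.0.0.1:9090/metrics | head -n 5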
	I0830 21:42:36.211019  978470 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0830 21:42:36.211027  978470 command_runner.go:130] > [crio.tracing]
	I0830 21:42:36.211038  978470 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0830 21:42:36.211048  978470 command_runner.go:130] > # enable_tracing = false
	I0830 21:42:36.211063  978470 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0830 21:42:36.211077  978470 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0830 21:42:36.211090  978470 command_runner.go:130] > # Number of samples to collect per million spans.
	I0830 21:42:36.211101  978470 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0830 21:42:36.211114  978470 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0830 21:42:36.211121  978470 command_runner.go:130] > [crio.stats]
	I0830 21:42:36.211135  978470 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0830 21:42:36.211148  978470 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0830 21:42:36.211159  978470 command_runner.go:130] > # stats_collection_period = 0
	I0830 21:42:36.211260  978470 cni.go:84] Creating CNI manager for ""
	I0830 21:42:36.211274  978470 cni.go:136] 3 nodes found, recommending kindnet
	I0830 21:42:36.211301  978470 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:42:36.211361  978470 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.20 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-752665 NodeName:multinode-752665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:42:36.211558  978470 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-752665"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:42:36.211673  978470 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-752665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
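The drop-in above uses the usual systemd override idiom: an empty ExecStart= clears the unit's command before the new one is set. Minikube applies the equivalent of the following internally; shown only as a hedged sketch of the standard follow-up steps:

	# re-read unit files so the new ExecStart takes effect, then restart kubelet
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet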
	I0830 21:42:36.211742  978470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 21:42:36.221702  978470 command_runner.go:130] > kubeadm
	I0830 21:42:36.221719  978470 command_runner.go:130] > kubectl
	I0830 21:42:36.221724  978470 command_runner.go:130] > kubelet
	I0830 21:42:36.221908  978470 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:42:36.221973  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 21:42:36.230486  978470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0830 21:42:36.246116  978470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 21:42:36.261499  978470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
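At this point the rendered config is on the node as /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of validating it without mutating cluster state, using kubeadm's standard dry-run mode (not something minikube runs in this flow):

	# validate the rendered kubeadm config without creating any cluster state
	sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run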
	I0830 21:42:36.277848  978470 ssh_runner.go:195] Run: grep 192.168.39.20	control-plane.minikube.internal$ /etc/hosts
	I0830 21:42:36.281649  978470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:42:36.293887  978470 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665 for IP: 192.168.39.20
	I0830 21:42:36.293923  978470 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:42:36.294078  978470 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 21:42:36.294131  978470 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 21:42:36.294234  978470 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key
	I0830 21:42:36.294319  978470 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key.2e41fa34
	I0830 21:42:36.294389  978470 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.key
	I0830 21:42:36.294403  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0830 21:42:36.294418  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0830 21:42:36.294441  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0830 21:42:36.294463  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0830 21:42:36.294479  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 21:42:36.294495  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 21:42:36.294510  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 21:42:36.294530  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 21:42:36.294591  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 21:42:36.294631  978470 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 21:42:36.294645  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 21:42:36.294677  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 21:42:36.294714  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:42:36.294745  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 21:42:36.294795  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:42:36.294837  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /usr/share/ca-certificates/9626212.pem
	I0830 21:42:36.294855  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:42:36.294869  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem -> /usr/share/ca-certificates/962621.pem
	I0830 21:42:36.295918  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 21:42:36.318907  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 21:42:36.346830  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 21:42:36.374518  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 21:42:36.398425  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:42:36.421697  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:42:36.444756  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:42:36.467782  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 21:42:36.490870  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 21:42:36.513816  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:42:36.537008  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 21:42:36.560152  978470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 21:42:36.578416  978470 ssh_runner.go:195] Run: openssl version
	I0830 21:42:36.584221  978470 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0830 21:42:36.584546  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 21:42:36.594796  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 21:42:36.599651  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:42:36.599987  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:42:36.600068  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 21:42:36.605482  978470 command_runner.go:130] > 3ec20f2e
	I0830 21:42:36.605675  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 21:42:36.615136  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:42:36.624992  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:42:36.629781  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:42:36.629965  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:42:36.630012  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:42:36.635683  978470 command_runner.go:130] > b5213941
	I0830 21:42:36.636003  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:42:36.645906  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 21:42:36.655780  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 21:42:36.660365  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:42:36.660536  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:42:36.660583  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 21:42:36.666119  978470 command_runner.go:130] > 51391683
	I0830 21:42:36.666542  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
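The three blocks above all follow the same pattern: hash the PEM with openssl, then expose it to the system trust store under the name <hash>.0. A minimal sketch of the same wiring for an arbitrary certificate (the my-ca.pem path is hypothetical):

	CERT=/usr/share/ca-certificates/my-ca.pem          # hypothetical certificate
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. 3ec20f2e above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # OpenSSL looks up CAs by this name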
	I0830 21:42:36.676715  978470 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:42:36.681172  978470 command_runner.go:130] > ca.crt
	I0830 21:42:36.681183  978470 command_runner.go:130] > ca.key
	I0830 21:42:36.681188  978470 command_runner.go:130] > healthcheck-client.crt
	I0830 21:42:36.681193  978470 command_runner.go:130] > healthcheck-client.key
	I0830 21:42:36.681200  978470 command_runner.go:130] > peer.crt
	I0830 21:42:36.681209  978470 command_runner.go:130] > peer.key
	I0830 21:42:36.681216  978470 command_runner.go:130] > server.crt
	I0830 21:42:36.681225  978470 command_runner.go:130] > server.key
	I0830 21:42:36.681435  978470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 21:42:36.686964  978470 command_runner.go:130] > Certificate will not expire
	I0830 21:42:36.687350  978470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 21:42:36.692876  978470 command_runner.go:130] > Certificate will not expire
	I0830 21:42:36.693177  978470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 21:42:36.698661  978470 command_runner.go:130] > Certificate will not expire
	I0830 21:42:36.698920  978470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 21:42:36.704786  978470 command_runner.go:130] > Certificate will not expire
	I0830 21:42:36.705337  978470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 21:42:36.710726  978470 command_runner.go:130] > Certificate will not expire
	I0830 21:42:36.710814  978470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0830 21:42:36.715884  978470 command_runner.go:130] > Certificate will not expire
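Each check above relies on openssl's -checkend, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24h). A small sketch that sweeps the same directories in one loop (illustrative, not what minikube runs):

	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  openssl x509 -noout -in "$c" -checkend 86400 >/dev/null \
	    && echo "ok: $c" || echo "expiring within 24h: $c"
	done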
	I0830 21:42:36.716310  978470 kubeadm.go:404] StartCluster: {Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:42:36.716420  978470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 21:42:36.716459  978470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:42:36.753114  978470 cri.go:89] found id: ""
	I0830 21:42:36.753185  978470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 21:42:36.762028  978470 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0830 21:42:36.762072  978470 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0830 21:42:36.762082  978470 command_runner.go:130] > /var/lib/minikube/etcd:
	I0830 21:42:36.762087  978470 command_runner.go:130] > member
	I0830 21:42:36.762111  978470 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 21:42:36.762156  978470 kubeadm.go:636] restartCluster start
	I0830 21:42:36.762236  978470 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 21:42:36.770523  978470 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:36.771110  978470 kubeconfig.go:92] found "multinode-752665" server: "https://192.168.39.20:8443"
	I0830 21:42:36.771562  978470 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:42:36.771864  978470 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:42:36.772597  978470 cert_rotation.go:137] Starting client certificate rotation controller
	I0830 21:42:36.772960  978470 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 21:42:36.781026  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:36.781080  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:36.791435  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:36.791459  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:36.791502  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:36.801350  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:37.302096  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:37.302186  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:37.313287  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:37.801782  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:37.801873  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:37.813213  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:38.301922  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:38.302033  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:38.313370  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:38.801947  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:38.802044  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:38.812983  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:39.301459  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:39.301608  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:39.313037  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:39.801539  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:39.801624  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:39.814196  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:40.301792  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:40.301880  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:40.314231  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:40.801727  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:40.801848  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:40.813934  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:41.302070  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:41.302156  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:41.313062  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:41.801627  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:41.801782  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:41.812928  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:42.301515  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:42.301618  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:42.312522  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:42.802143  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:42.802247  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:42.813463  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:43.302183  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:43.302296  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:43.313404  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:43.801550  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:43.801650  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:43.812870  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:44.302487  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:44.302578  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:44.313509  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:44.802141  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:44.802246  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:44.813401  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:45.301930  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:45.302023  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:45.313450  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:45.802072  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:45.802189  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:45.813842  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:46.301675  978470 api_server.go:166] Checking apiserver status ...
	I0830 21:42:46.301783  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:42:46.312946  978470 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:42:46.781785  978470 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 21:42:46.781836  978470 kubeadm.go:1128] stopping kube-system containers ...
	I0830 21:42:46.781864  978470 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 21:42:46.781951  978470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:42:46.811931  978470 cri.go:89] found id: ""
	I0830 21:42:46.812023  978470 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 21:42:46.827088  978470 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 21:42:46.835469  978470 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0830 21:42:46.835493  978470 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0830 21:42:46.835504  978470 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0830 21:42:46.835516  978470 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:42:46.835550  978470 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:42:46.835623  978470 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 21:42:46.843736  978470 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 21:42:46.843779  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:42:46.945458  978470 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 21:42:46.946017  978470 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0830 21:42:46.946661  978470 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0830 21:42:46.947349  978470 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 21:42:46.948141  978470 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0830 21:42:46.948871  978470 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0830 21:42:46.949873  978470 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0830 21:42:46.950571  978470 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0830 21:42:46.951131  978470 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0830 21:42:46.951847  978470 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 21:42:46.952373  978470 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 21:42:46.953020  978470 command_runner.go:130] > [certs] Using the existing "sa" key
	I0830 21:42:46.954459  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:42:47.884913  978470 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 21:42:47.884937  978470 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 21:42:47.884943  978470 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 21:42:47.884948  978470 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 21:42:47.884955  978470 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 21:42:47.884990  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:42:47.948168  978470 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:42:47.948880  978470 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:42:47.948965  978470 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 21:42:48.058038  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:42:48.149888  978470 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 21:42:48.149919  978470 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 21:42:48.149932  978470 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 21:42:48.149955  978470 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 21:42:48.149984  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:42:48.229940  978470 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 21:42:48.229979  978470 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:42:48.230047  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:42:48.245967  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:42:48.777878  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:42:49.277629  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:42:49.777608  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:42:50.278066  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:42:50.777540  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:42:50.807010  978470 command_runner.go:130] > 1104
	I0830 21:42:50.807074  978470 api_server.go:72] duration metric: took 2.577093065s to wait for apiserver process to appear ...
	I0830 21:42:50.807087  978470 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:42:50.807109  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:42:50.807889  978470 api_server.go:269] stopped: https://192.168.39.20:8443/healthz: Get "https://192.168.39.20:8443/healthz": dial tcp 192.168.39.20:8443: connect: connection refused
	I0830 21:42:50.807936  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:42:50.808471  978470 api_server.go:269] stopped: https://192.168.39.20:8443/healthz: Get "https://192.168.39.20:8443/healthz": dial tcp 192.168.39.20:8443: connect: connection refused
	I0830 21:42:51.308570  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:42:53.672873  978470 api_server.go:279] https://192.168.39.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 21:42:53.672920  978470 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 21:42:53.672942  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:42:53.704921  978470 api_server.go:279] https://192.168.39.20:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 21:42:53.704959  978470 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 21:42:53.809185  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:42:53.822441  978470 api_server.go:279] https://192.168.39.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 21:42:53.822481  978470 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 21:42:54.309039  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:42:54.314310  978470 api_server.go:279] https://192.168.39.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 21:42:54.314391  978470 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 21:42:54.808868  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:42:54.819462  978470 api_server.go:279] https://192.168.39.20:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 21:42:54.819497  978470 api_server.go:103] status: https://192.168.39.20:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 21:42:55.309033  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:42:55.314995  978470 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I0830 21:42:55.315118  978470 round_trippers.go:463] GET https://192.168.39.20:8443/version
	I0830 21:42:55.315128  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:55.315136  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:55.315150  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:55.325287  978470 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0830 21:42:55.325315  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:55.325327  978470 round_trippers.go:580]     Audit-Id: 9a0f1377-9095-4d01-ad5f-0488d2ae2f78
	I0830 21:42:55.325337  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:55.325345  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:55.325354  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:55.325362  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:55.325380  978470 round_trippers.go:580]     Content-Length: 263
	I0830 21:42:55.325392  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:55 GMT
	I0830 21:42:55.325427  978470 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0830 21:42:55.325544  978470 api_server.go:141] control plane version: v1.28.1
	I0830 21:42:55.325569  978470 api_server.go:131] duration metric: took 4.518475822s to wait for apiserver health ...
	I0830 21:42:55.325581  978470 cni.go:84] Creating CNI manager for ""
	I0830 21:42:55.325592  978470 cni.go:136] 3 nodes found, recommending kindnet
	I0830 21:42:55.327659  978470 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0830 21:42:55.329329  978470 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 21:42:55.341847  978470 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 21:42:55.341874  978470 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0830 21:42:55.341886  978470 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0830 21:42:55.341898  978470 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:42:55.341920  978470 command_runner.go:130] > Access: 2023-08-30 21:42:23.592476286 +0000
	I0830 21:42:55.341934  978470 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0830 21:42:55.341943  978470 command_runner.go:130] > Change: 2023-08-30 21:42:21.726476286 +0000
	I0830 21:42:55.341950  978470 command_runner.go:130] >  Birth: -
	I0830 21:42:55.342031  978470 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 21:42:55.342048  978470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 21:42:55.377593  978470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 21:42:56.434787  978470 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0830 21:42:56.446353  978470 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0830 21:42:56.450714  978470 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0830 21:42:56.466634  978470 command_runner.go:130] > daemonset.apps/kindnet configured
	I0830 21:42:56.470833  978470 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.093207715s)
	I0830 21:42:56.470889  978470 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:42:56.471006  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:42:56.471015  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.471023  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.471029  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.475885  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:42:56.475910  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.475918  978470 round_trippers.go:580]     Audit-Id: 62080d41-9d2a-4df7-8663-9743a1a021ee
	I0830 21:42:56.475923  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.475929  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.475934  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.475940  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.475945  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.477390  978470 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"752"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82593 chars]
	I0830 21:42:56.481471  978470 system_pods.go:59] 12 kube-system pods found
	I0830 21:42:56.481508  978470 system_pods.go:61] "coredns-5dd5756b68-zcppg" [4742270b-6c64-411b-bfb6-8c53211aa106] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 21:42:56.481520  978470 system_pods.go:61] "etcd-multinode-752665" [25e2609d-f391-4e71-823a-c4fe8625092d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 21:42:56.481533  978470 system_pods.go:61] "kindnet-4q5fx" [864ea4a7-8b4f-4690-90a3-a4c50a909f44] Running
	I0830 21:42:56.481544  978470 system_pods.go:61] "kindnet-d4xrz" [db9dcca6-eedf-4c5f-b3e8-785a4689b7ea] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0830 21:42:56.481556  978470 system_pods.go:61] "kindnet-x5kk4" [2fdd77f6-856a-4400-b881-210549c588e2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0830 21:42:56.481570  978470 system_pods.go:61] "kube-apiserver-multinode-752665" [d813d11d-d0ec-4091-a72b-187bd44eabe3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 21:42:56.481579  978470 system_pods.go:61] "kube-controller-manager-multinode-752665" [0391b35f-5177-412c-b7d4-073efb2de36b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 21:42:56.481589  978470 system_pods.go:61] "kube-proxy-5twl5" [ff4250a4-1482-42c0-a523-e97faf806c43] Running
	I0830 21:42:56.481593  978470 system_pods.go:61] "kube-proxy-jwftn" [bfc888c8-7790-4267-a1fc-cab9448e097b] Running
	I0830 21:42:56.481597  978470 system_pods.go:61] "kube-proxy-vltx5" [24ee271e-5778-4d0c-ab2c-77426f2673b3] Running
	I0830 21:42:56.481605  978470 system_pods.go:61] "kube-scheduler-multinode-752665" [4c8a6a98-51b6-4010-9519-add75ab1a7a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 21:42:56.481609  978470 system_pods.go:61] "storage-provisioner" [67db5a8a-290a-40a7-b42e-212d99db812a] Running
	I0830 21:42:56.481616  978470 system_pods.go:74] duration metric: took 10.721909ms to wait for pod list to return data ...
	I0830 21:42:56.481625  978470 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:42:56.481697  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes
	I0830 21:42:56.481705  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.481712  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.481718  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.484196  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:56.484208  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.484214  978470 round_trippers.go:580]     Audit-Id: 8220d370-0047-4528-84cc-bec80cb8cfd3
	I0830 21:42:56.484220  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.484225  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.484232  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.484241  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.484259  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.484634  978470 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"752"},"items":[{"metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15249 chars]
	I0830 21:42:56.485898  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:42:56.485930  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:42:56.485943  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:42:56.485957  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:42:56.485963  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:42:56.485976  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:42:56.485990  978470 node_conditions.go:105] duration metric: took 4.359355ms to run NodePressure ...
	I0830 21:42:56.486012  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:42:56.704052  978470 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0830 21:42:56.704083  978470 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0830 21:42:56.704118  978470 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 21:42:56.704255  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0830 21:42:56.704268  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.704279  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.704288  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.707518  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:42:56.707537  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.707557  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.707569  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.707577  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.707588  978470 round_trippers.go:580]     Audit-Id: 2189f7c4-868b-479f-b490-0bee5f896fe7
	I0830 21:42:56.707597  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.707602  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.708722  978470 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"754"},"items":[{"metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"739","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I0830 21:42:56.710098  978470 kubeadm.go:787] kubelet initialised
	I0830 21:42:56.710118  978470 kubeadm.go:788] duration metric: took 5.989527ms waiting for restarted kubelet to initialise ...
	I0830 21:42:56.710127  978470 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:42:56.710211  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:42:56.710227  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.710239  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.710249  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.713990  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:42:56.714009  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.714016  978470 round_trippers.go:580]     Audit-Id: 0562e973-f3f9-4be9-8fdf-43d2718ea252
	I0830 21:42:56.714021  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.714026  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.714032  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.714037  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.714043  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.715474  978470 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"754"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82593 chars]
	I0830 21:42:56.717963  978470 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:56.718051  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:42:56.718062  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.718072  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.718087  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.719994  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:42:56.720012  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.720021  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.720026  978470 round_trippers.go:580]     Audit-Id: acf98896-b3f2-467d-ab0c-6c3e727d2877
	I0830 21:42:56.720032  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.720037  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.720045  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.720054  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.720315  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:42:56.720694  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:56.720708  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.720718  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.720724  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.722460  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:42:56.722474  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.722480  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.722487  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.722496  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.722507  978470 round_trippers.go:580]     Audit-Id: dea3b687-4cf6-4fdf-b584-33d4b4a6ef28
	I0830 21:42:56.722520  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.722532  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.722710  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:56.723137  978470 pod_ready.go:97] node "multinode-752665" hosting pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:56.723166  978470 pod_ready.go:81] duration metric: took 5.1833ms waiting for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	E0830 21:42:56.723181  978470 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-752665" hosting pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:56.723189  978470 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:56.723270  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-752665
	I0830 21:42:56.723283  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.723296  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.723306  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.725095  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:42:56.725108  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.725115  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.725121  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.725129  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.725139  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.725149  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.725166  978470 round_trippers.go:580]     Audit-Id: 74159a4d-5bdc-45c9-ae37-f110b0827959
	I0830 21:42:56.725300  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"739","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0830 21:42:56.725629  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:56.725642  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.725648  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.725654  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.727292  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:42:56.727307  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.727316  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.727333  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.727347  978470 round_trippers.go:580]     Audit-Id: d2b5427e-b80f-4c89-938a-60be8646cbc3
	I0830 21:42:56.727355  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.727360  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.727366  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.727579  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:56.727989  978470 pod_ready.go:97] node "multinode-752665" hosting pod "etcd-multinode-752665" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:56.728013  978470 pod_ready.go:81] duration metric: took 4.81269ms waiting for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	E0830 21:42:56.728023  978470 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-752665" hosting pod "etcd-multinode-752665" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:56.728057  978470 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:56.728115  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-752665
	I0830 21:42:56.728124  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.728135  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.728144  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.729850  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:42:56.729866  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.729875  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.729882  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.729894  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.729910  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.729919  978470 round_trippers.go:580]     Audit-Id: 4021be76-0d28-40d0-bc8f-1b02f733de50
	I0830 21:42:56.729929  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.730227  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-752665","namespace":"kube-system","uid":"d813d11d-d0ec-4091-a72b-187bd44eabe3","resourceVersion":"741","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.20:8443","kubernetes.io/config.hash":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.mirror":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.seen":"2023-08-30T21:32:26.214498990Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0830 21:42:56.730673  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:56.730689  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.730696  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.730702  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.732366  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:42:56.732384  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.732393  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.732401  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.732410  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.732419  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.732427  978470 round_trippers.go:580]     Audit-Id: 7ab5e0f7-2f11-4940-889b-564208afe548
	I0830 21:42:56.732436  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.732614  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:56.732879  978470 pod_ready.go:97] node "multinode-752665" hosting pod "kube-apiserver-multinode-752665" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:56.732895  978470 pod_ready.go:81] duration metric: took 4.828335ms waiting for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	E0830 21:42:56.732901  978470 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-752665" hosting pod "kube-apiserver-multinode-752665" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:56.732910  978470 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:56.732957  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-752665
	I0830 21:42:56.732980  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.732986  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.732993  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.734869  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:42:56.734886  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.734896  978470 round_trippers.go:580]     Audit-Id: ecba2426-180b-4df1-9fb4-a96bf8b2f35f
	I0830 21:42:56.734914  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.734922  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.734931  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.734941  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.734955  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.735127  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-752665","namespace":"kube-system","uid":"0391b35f-5177-412c-b7d4-073efb2de36b","resourceVersion":"740","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.mirror":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.seen":"2023-08-30T21:32:26.214500244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0830 21:42:56.871536  978470 request.go:629] Waited for 136.046432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:56.871619  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:56.871625  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:56.871636  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:56.871646  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:56.874650  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:56.874673  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:56.874680  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:56.874689  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:56.874695  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:56 GMT
	I0830 21:42:56.874700  978470 round_trippers.go:580]     Audit-Id: 47824c39-8fc5-4add-b736-df31e1d9042d
	I0830 21:42:56.874705  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:56.874711  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:56.875029  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:56.875366  978470 pod_ready.go:97] node "multinode-752665" hosting pod "kube-controller-manager-multinode-752665" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:56.875390  978470 pod_ready.go:81] duration metric: took 142.461255ms waiting for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	E0830 21:42:56.875400  978470 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-752665" hosting pod "kube-controller-manager-multinode-752665" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:56.875410  978470 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:57.071855  978470 request.go:629] Waited for 196.362468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:42:57.071951  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:42:57.071959  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:57.071967  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:57.071973  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:57.074790  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:57.074814  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:57.074823  978470 round_trippers.go:580]     Audit-Id: 5d5ff838-bca0-4c64-be91-c1932d5688e9
	I0830 21:42:57.074836  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:57.074843  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:57.074850  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:57.074858  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:57.074865  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:57 GMT
	I0830 21:42:57.075040  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5twl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff4250a4-1482-42c0-a523-e97faf806c43","resourceVersion":"477","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0830 21:42:57.271857  978470 request.go:629] Waited for 196.36302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:42:57.271970  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:42:57.271979  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:57.271992  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:57.272009  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:57.275518  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:42:57.275538  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:57.275545  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:57.275550  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:57.275555  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:57.275565  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:57.275573  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:57 GMT
	I0830 21:42:57.275580  978470 round_trippers.go:580]     Audit-Id: 0a51ff21-f3d5-464f-8d63-e81c9e5d802d
	I0830 21:42:57.275960  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"738","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I0830 21:42:57.276277  978470 pod_ready.go:92] pod "kube-proxy-5twl5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:42:57.276294  978470 pod_ready.go:81] duration metric: took 400.871683ms waiting for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:57.276307  978470 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jwftn" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:57.471702  978470 request.go:629] Waited for 195.320732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwftn
	I0830 21:42:57.471798  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwftn
	I0830 21:42:57.471806  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:57.471816  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:57.471825  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:57.474805  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:57.474829  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:57.474840  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:57 GMT
	I0830 21:42:57.474849  978470 round_trippers.go:580]     Audit-Id: 69116803-770e-4ecd-a179-e2d3ddf1100c
	I0830 21:42:57.474858  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:57.474866  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:57.474878  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:57.474887  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:57.475003  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jwftn","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc888c8-7790-4267-a1fc-cab9448e097b","resourceVersion":"675","creationTimestamp":"2023-08-30T21:34:21Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:34:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0830 21:42:57.671881  978470 request.go:629] Waited for 196.343286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:42:57.671954  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:42:57.671960  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:57.671971  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:57.671983  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:57.674564  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:57.674591  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:57.674602  978470 round_trippers.go:580]     Audit-Id: 5ddb8e36-211d-44c0-98a0-4f3188ad65f4
	I0830 21:42:57.674611  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:57.674623  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:57.674642  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:57.674650  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:57.674660  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:57 GMT
	I0830 21:42:57.674954  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m03","uid":"2c7759fc-7c08-4ea2-b0c4-b56d98a23e6f","resourceVersion":"748","creationTimestamp":"2023-08-30T21:35:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:35:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3412 chars]
	I0830 21:42:57.675236  978470 pod_ready.go:92] pod "kube-proxy-jwftn" in "kube-system" namespace has status "Ready":"True"
	I0830 21:42:57.675250  978470 pod_ready.go:81] duration metric: took 398.937416ms waiting for pod "kube-proxy-jwftn" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:57.675260  978470 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:57.871545  978470 request.go:629] Waited for 196.199866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:42:57.871655  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:42:57.871667  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:57.871679  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:57.871690  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:57.874144  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:57.874171  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:57.874182  978470 round_trippers.go:580]     Audit-Id: d8820ab9-c7f9-429a-91f6-968513319b95
	I0830 21:42:57.874219  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:57.874228  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:57.874239  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:57.874248  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:57.874260  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:57 GMT
	I0830 21:42:57.874502  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vltx5","generateName":"kube-proxy-","namespace":"kube-system","uid":"24ee271e-5778-4d0c-ab2c-77426f2673b3","resourceVersion":"752","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0830 21:42:58.071839  978470 request.go:629] Waited for 196.70781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:58.071899  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:58.071905  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:58.071916  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:58.071937  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:58.075810  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:42:58.075838  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:58.075850  978470 round_trippers.go:580]     Audit-Id: 0b5becc7-ecca-464c-96f9-be499939572c
	I0830 21:42:58.075858  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:58.075867  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:58.075876  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:58.075884  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:58.075895  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:58 GMT
	I0830 21:42:58.076129  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:58.076602  978470 pod_ready.go:97] node "multinode-752665" hosting pod "kube-proxy-vltx5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:58.076634  978470 pod_ready.go:81] duration metric: took 401.36692ms waiting for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	E0830 21:42:58.076645  978470 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-752665" hosting pod "kube-proxy-vltx5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:58.076659  978470 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:42:58.271038  978470 request.go:629] Waited for 194.298986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:42:58.271132  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:42:58.271146  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:58.271158  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:58.271184  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:58.275065  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:42:58.275092  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:58.275101  978470 round_trippers.go:580]     Audit-Id: 62e8ba0c-5fa9-46f7-931a-bb741fce8d40
	I0830 21:42:58.275109  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:58.275120  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:58.275128  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:58.275143  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:58.275157  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:58 GMT
	I0830 21:42:58.275370  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-752665","namespace":"kube-system","uid":"4c8a6a98-51b6-4010-9519-add75ab1a7a9","resourceVersion":"742","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.mirror":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.seen":"2023-08-30T21:32:35.235897289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I0830 21:42:58.471231  978470 request.go:629] Waited for 195.27786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:58.471318  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:58.471329  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:58.471341  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:58.471350  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:58.476178  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:42:58.476206  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:58.476217  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:58.476228  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:58.476236  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:58.476244  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:58.476252  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:58 GMT
	I0830 21:42:58.476263  978470 round_trippers.go:580]     Audit-Id: de04c71e-2072-4ddd-bd3f-540e740f1b9d
	I0830 21:42:58.476479  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:58.476981  978470 pod_ready.go:97] node "multinode-752665" hosting pod "kube-scheduler-multinode-752665" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:58.477012  978470 pod_ready.go:81] duration metric: took 400.344806ms waiting for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	E0830 21:42:58.477027  978470 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-752665" hosting pod "kube-scheduler-multinode-752665" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-752665" has status "Ready":"False"
	I0830 21:42:58.477040  978470 pod_ready.go:38] duration metric: took 1.766900977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:42:58.477065  978470 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 21:42:58.489644  978470 command_runner.go:130] > -16
	I0830 21:42:58.489912  978470 ops.go:34] apiserver oom_adj: -16
	I0830 21:42:58.489927  978470 kubeadm.go:640] restartCluster took 21.727749409s
	I0830 21:42:58.489936  978470 kubeadm.go:406] StartCluster complete in 21.773637317s
	I0830 21:42:58.489967  978470 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:42:58.490061  978470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:42:58.490932  978470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:42:58.491203  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 21:42:58.491209  978470 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0830 21:42:58.493407  978470 out.go:177] * Enabled addons: 
	I0830 21:42:58.491443  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:42:58.491481  978470 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:42:58.494830  978470 addons.go:502] enable addons completed in 3.650808ms: enabled=[]
	I0830 21:42:58.495231  978470 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:42:58.495729  978470 round_trippers.go:463] GET https://192.168.39.20:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 21:42:58.495746  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:58.495758  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:58.495787  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:58.498580  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:58.498603  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:58.498614  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:58.498624  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:58.498636  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:58.498645  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:58.498657  978470 round_trippers.go:580]     Content-Length: 291
	I0830 21:42:58.498668  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:58 GMT
	I0830 21:42:58.498681  978470 round_trippers.go:580]     Audit-Id: 89baee00-0894-4143-be2d-676d442e473c
	I0830 21:42:58.498728  978470 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4cda7228-5995-4a40-902e-7c8e87f8c72e","resourceVersion":"753","creationTimestamp":"2023-08-30T21:32:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0830 21:42:58.498930  978470 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-752665" context rescaled to 1 replicas
	I0830 21:42:58.499052  978470 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:42:58.500816  978470 out.go:177] * Verifying Kubernetes components...
	I0830 21:42:58.502191  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:42:58.589703  978470 command_runner.go:130] > apiVersion: v1
	I0830 21:42:58.589727  978470 command_runner.go:130] > data:
	I0830 21:42:58.589732  978470 command_runner.go:130] >   Corefile: |
	I0830 21:42:58.589736  978470 command_runner.go:130] >     .:53 {
	I0830 21:42:58.589739  978470 command_runner.go:130] >         log
	I0830 21:42:58.589744  978470 command_runner.go:130] >         errors
	I0830 21:42:58.589748  978470 command_runner.go:130] >         health {
	I0830 21:42:58.589752  978470 command_runner.go:130] >            lameduck 5s
	I0830 21:42:58.589757  978470 command_runner.go:130] >         }
	I0830 21:42:58.589761  978470 command_runner.go:130] >         ready
	I0830 21:42:58.589767  978470 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0830 21:42:58.589774  978470 command_runner.go:130] >            pods insecure
	I0830 21:42:58.589781  978470 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0830 21:42:58.589785  978470 command_runner.go:130] >            ttl 30
	I0830 21:42:58.589791  978470 command_runner.go:130] >         }
	I0830 21:42:58.589807  978470 command_runner.go:130] >         prometheus :9153
	I0830 21:42:58.589814  978470 command_runner.go:130] >         hosts {
	I0830 21:42:58.589818  978470 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0830 21:42:58.589823  978470 command_runner.go:130] >            fallthrough
	I0830 21:42:58.589826  978470 command_runner.go:130] >         }
	I0830 21:42:58.589831  978470 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0830 21:42:58.589836  978470 command_runner.go:130] >            max_concurrent 1000
	I0830 21:42:58.589840  978470 command_runner.go:130] >         }
	I0830 21:42:58.589844  978470 command_runner.go:130] >         cache 30
	I0830 21:42:58.589849  978470 command_runner.go:130] >         loop
	I0830 21:42:58.589853  978470 command_runner.go:130] >         reload
	I0830 21:42:58.589857  978470 command_runner.go:130] >         loadbalance
	I0830 21:42:58.589861  978470 command_runner.go:130] >     }
	I0830 21:42:58.589864  978470 command_runner.go:130] > kind: ConfigMap
	I0830 21:42:58.589872  978470 command_runner.go:130] > metadata:
	I0830 21:42:58.589879  978470 command_runner.go:130] >   creationTimestamp: "2023-08-30T21:32:35Z"
	I0830 21:42:58.589883  978470 command_runner.go:130] >   name: coredns
	I0830 21:42:58.589889  978470 command_runner.go:130] >   namespace: kube-system
	I0830 21:42:58.589893  978470 command_runner.go:130] >   resourceVersion: "362"
	I0830 21:42:58.589897  978470 command_runner.go:130] >   uid: 27acb354-b614-4ab9-9a76-162f2b2cdad9
	I0830 21:42:58.593584  978470 node_ready.go:35] waiting up to 6m0s for node "multinode-752665" to be "Ready" ...
	I0830 21:42:58.593751  978470 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0830 21:42:58.671935  978470 request.go:629] Waited for 78.238877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:58.672056  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:58.672069  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:58.672082  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:58.672093  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:58.675232  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:42:58.675250  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:58.675256  978470 round_trippers.go:580]     Audit-Id: 58999e5e-5a1f-422b-ba99-30de49283d25
	I0830 21:42:58.675262  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:58.675267  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:58.675274  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:58.675280  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:58.675285  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:58 GMT
	I0830 21:42:58.675493  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:58.871200  978470 request.go:629] Waited for 195.274538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:58.871284  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:58.871291  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:58.871303  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:58.871313  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:58.873640  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:58.873663  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:58.873681  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:58.873691  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:58 GMT
	I0830 21:42:58.873699  978470 round_trippers.go:580]     Audit-Id: bfcb4f05-b990-4584-8426-728c8b2bac11
	I0830 21:42:58.873705  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:58.873713  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:58.873719  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:58.873842  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:59.375051  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:59.375096  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:59.375109  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:59.375117  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:59.377963  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:59.377984  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:59.377991  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:59.378000  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:59.378009  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:59.378021  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:59.378030  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:59 GMT
	I0830 21:42:59.378044  978470 round_trippers.go:580]     Audit-Id: 9a187cb8-1a44-4a88-b2b3-d841bd25f424
	I0830 21:42:59.378213  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:42:59.874839  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:42:59.874868  978470 round_trippers.go:469] Request Headers:
	I0830 21:42:59.874880  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:42:59.874888  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:42:59.877606  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:42:59.877632  978470 round_trippers.go:577] Response Headers:
	I0830 21:42:59.877658  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:42:59.877668  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:42:59.877697  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:42:59 GMT
	I0830 21:42:59.877708  978470 round_trippers.go:580]     Audit-Id: 0dee6a5c-ec4e-4db5-bae2-a6a5fe4a4842
	I0830 21:42:59.877719  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:42:59.877729  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:42:59.878087  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:00.374726  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:00.374760  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:00.374772  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:00.374780  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:00.377344  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:00.377371  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:00.377428  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:00.377440  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:00.377445  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:00.377450  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:00 GMT
	I0830 21:43:00.377459  978470 round_trippers.go:580]     Audit-Id: e5939062-d244-4592-b1ab-23011fb024b5
	I0830 21:43:00.377469  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:00.377641  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:00.874778  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:00.874801  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:00.874810  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:00.874816  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:00.878670  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:00.878691  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:00.878701  978470 round_trippers.go:580]     Audit-Id: fb65047b-64e1-42da-956b-d359cfa9fe78
	I0830 21:43:00.878710  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:00.878718  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:00.878727  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:00.878738  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:00.878750  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:00 GMT
	I0830 21:43:00.878886  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:00.879387  978470 node_ready.go:58] node "multinode-752665" has status "Ready":"False"
	I0830 21:43:01.374462  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:01.374486  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:01.374500  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:01.374506  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:01.377247  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:01.377267  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:01.377274  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:01 GMT
	I0830 21:43:01.377280  978470 round_trippers.go:580]     Audit-Id: 82ecda18-208e-439d-adf7-2d2a04931e12
	I0830 21:43:01.377288  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:01.377298  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:01.377305  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:01.377316  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:01.377459  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:01.875124  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:01.875151  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:01.875181  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:01.875187  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:01.877611  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:01.877633  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:01.877641  978470 round_trippers.go:580]     Audit-Id: 495011d9-e213-4aba-816d-f67c7cba57f5
	I0830 21:43:01.877649  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:01.877654  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:01.877668  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:01.877676  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:01.877689  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:01 GMT
	I0830 21:43:01.877981  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:02.374562  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:02.374587  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:02.374599  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:02.374607  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:02.378252  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:02.378276  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:02.378286  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:02.378294  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:02.378302  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:02.378309  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:02.378317  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:02 GMT
	I0830 21:43:02.378325  978470 round_trippers.go:580]     Audit-Id: b2296279-1465-4035-a8ba-1ffd1e6e4350
	I0830 21:43:02.378661  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:02.874687  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:02.874718  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:02.874732  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:02.874743  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:02.877416  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:02.877436  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:02.877443  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:02.877449  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:02.877455  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:02 GMT
	I0830 21:43:02.877460  978470 round_trippers.go:580]     Audit-Id: 6b654f3b-231d-4a5b-a94f-6203f1e75d47
	I0830 21:43:02.877471  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:02.877477  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:02.877723  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:03.375402  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:03.375426  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:03.375437  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:03.375443  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:03.378698  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:03.378724  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:03.378731  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:03.378737  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:03.378743  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:03.378751  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:03.378761  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:03 GMT
	I0830 21:43:03.378774  978470 round_trippers.go:580]     Audit-Id: 12916af9-bcfe-41fb-9b1b-84a5f8914ced
	I0830 21:43:03.378922  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:03.379336  978470 node_ready.go:58] node "multinode-752665" has status "Ready":"False"
	I0830 21:43:03.874534  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:03.874562  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:03.874574  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:03.874582  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:03.877202  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:03.877228  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:03.877243  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:03 GMT
	I0830 21:43:03.877252  978470 round_trippers.go:580]     Audit-Id: 6f4c4643-a4e1-48dd-8eb7-f5191c154dfa
	I0830 21:43:03.877262  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:03.877270  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:03.877285  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:03.877293  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:03.877526  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"703","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0830 21:43:04.375238  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:04.375268  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:04.375282  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:04.375292  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:04.378769  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:04.378795  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:04.378806  978470 round_trippers.go:580]     Audit-Id: 3bed7585-3669-4aab-8b86-ced447fb9cc8
	I0830 21:43:04.378816  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:04.378826  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:04.378838  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:04.378850  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:04.378863  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:04 GMT
	I0830 21:43:04.379056  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:04.379492  978470 node_ready.go:49] node "multinode-752665" has status "Ready":"True"
	I0830 21:43:04.379510  978470 node_ready.go:38] duration metric: took 5.785892393s waiting for node "multinode-752665" to be "Ready" ...
	I0830 21:43:04.379520  978470 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:43:04.379600  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:43:04.379611  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:04.379622  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:04.379633  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:04.383188  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:04.383207  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:04.383216  978470 round_trippers.go:580]     Audit-Id: 44c98aa2-15b4-40b6-81da-1ea5433d8325
	I0830 21:43:04.383230  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:04.383239  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:04.383254  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:04.383263  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:04.383274  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:04 GMT
	I0830 21:43:04.384389  978470 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"828"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82547 chars]
	I0830 21:43:04.388157  978470 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:04.388249  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:04.388259  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:04.388270  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:04.388288  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:04.391420  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:04.391437  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:04.391446  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:04.391454  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:04.391461  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:04.391469  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:04.391477  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:04 GMT
	I0830 21:43:04.391490  978470 round_trippers.go:580]     Audit-Id: c6cbc7d3-8723-4616-96e5-449137655bff
	I0830 21:43:04.391748  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:04.392280  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:04.392297  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:04.392308  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:04.392321  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:04.394382  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:04.394397  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:04.394414  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:04.394428  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:04.394441  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:04 GMT
	I0830 21:43:04.394456  978470 round_trippers.go:580]     Audit-Id: 81a54b4d-253d-4ce6-ab16-215810de12fb
	I0830 21:43:04.394465  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:04.394477  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:04.394735  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:04.395039  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:04.395051  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:04.395061  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:04.395070  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:04.396893  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:04.396912  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:04.396922  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:04.396931  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:04.396945  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:04.396951  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:04.396956  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:04 GMT
	I0830 21:43:04.396961  978470 round_trippers.go:580]     Audit-Id: 1757f50c-8dbd-491a-8c9c-ccb958bf120b
	I0830 21:43:04.397099  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:04.397574  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:04.397596  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:04.397608  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:04.397622  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:04.399354  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:04.399370  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:04.399380  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:04 GMT
	I0830 21:43:04.399388  978470 round_trippers.go:580]     Audit-Id: 06f1481b-38f9-4ab8-97b2-c1eb2e050401
	I0830 21:43:04.399395  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:04.399403  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:04.399412  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:04.399422  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:04.399712  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:04.900948  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:04.900975  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:04.900985  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:04.900993  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:04.904350  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:04.904376  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:04.904389  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:04.904397  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:04 GMT
	I0830 21:43:04.904404  978470 round_trippers.go:580]     Audit-Id: a3d9dc25-3d96-4e7f-b220-2d5256ff481f
	I0830 21:43:04.904412  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:04.904420  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:04.904428  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:04.904691  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:04.905174  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:04.905189  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:04.905199  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:04.905208  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:04.908066  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:04.908088  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:04.908098  978470 round_trippers.go:580]     Audit-Id: da7f0336-32c8-4096-9177-03784ede24e5
	I0830 21:43:04.908107  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:04.908115  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:04.908129  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:04.908138  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:04.908147  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:04 GMT
	I0830 21:43:04.908287  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:05.401004  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:05.401030  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:05.401041  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:05.401049  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:05.403892  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:05.403922  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:05.403932  978470 round_trippers.go:580]     Audit-Id: fd4425ee-d43a-42f0-94e2-47d834c8c717
	I0830 21:43:05.403941  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:05.403949  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:05.403957  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:05.403965  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:05.403976  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:05 GMT
	I0830 21:43:05.404410  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:05.404903  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:05.404919  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:05.404927  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:05.404934  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:05.407199  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:05.407219  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:05.407229  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:05 GMT
	I0830 21:43:05.407237  978470 round_trippers.go:580]     Audit-Id: ed5f3632-1f82-4df8-9caf-785c6a43e8b0
	I0830 21:43:05.407245  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:05.407253  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:05.407264  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:05.407273  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:05.407426  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:05.900550  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:05.900579  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:05.900592  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:05.900601  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:05.903503  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:05.903530  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:05.903540  978470 round_trippers.go:580]     Audit-Id: 056928f6-01c5-4d33-b1a9-42d63e7e8bcc
	I0830 21:43:05.903550  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:05.903559  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:05.903569  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:05.903577  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:05.903586  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:05 GMT
	I0830 21:43:05.903751  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:05.904314  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:05.904329  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:05.904336  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:05.904342  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:05.906468  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:05.906489  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:05.906499  978470 round_trippers.go:580]     Audit-Id: 96f4d252-f918-4aec-933f-6d7d63a7521e
	I0830 21:43:05.906508  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:05.906521  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:05.906529  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:05.906542  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:05.906552  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:05 GMT
	I0830 21:43:05.906682  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:06.400436  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:06.400473  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:06.400492  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:06.400502  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:06.403575  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:06.403601  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:06.403611  978470 round_trippers.go:580]     Audit-Id: be275f4b-ea75-4ea1-a33d-106b59da96c4
	I0830 21:43:06.403621  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:06.403633  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:06.403641  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:06.403652  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:06.403668  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:06 GMT
	I0830 21:43:06.404265  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:06.404837  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:06.404853  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:06.404861  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:06.404867  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:06.406865  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:06.406892  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:06.406901  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:06.406910  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:06.406918  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:06.406927  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:06 GMT
	I0830 21:43:06.406940  978470 round_trippers.go:580]     Audit-Id: 5ed686de-7b1a-4323-8a2a-c3c35474a316
	I0830 21:43:06.406948  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:06.407252  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:06.407607  978470 pod_ready.go:102] pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:43:06.901178  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:06.901202  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:06.901211  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:06.901217  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:06.903982  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:06.904014  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:06.904026  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:06.904035  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:06 GMT
	I0830 21:43:06.904048  978470 round_trippers.go:580]     Audit-Id: 5fdf53c4-c70c-457f-86de-37eb60c443d7
	I0830 21:43:06.904055  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:06.904064  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:06.904074  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:06.904329  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:06.904972  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:06.904994  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:06.905006  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:06.905017  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:06.907230  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:06.907252  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:06.907262  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:06.907270  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:06.907279  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:06.907286  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:06 GMT
	I0830 21:43:06.907295  978470 round_trippers.go:580]     Audit-Id: a68cf045-a7c9-4816-80d6-2fe850842864
	I0830 21:43:06.907304  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:06.907452  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:07.401184  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:07.401213  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:07.401225  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:07.401232  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:07.403741  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:07.403761  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:07.403787  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:07.403795  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:07.403804  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:07.403815  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:07.403823  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:07 GMT
	I0830 21:43:07.403832  978470 round_trippers.go:580]     Audit-Id: 36bbb527-8ff7-4039-a952-2e7235278efb
	I0830 21:43:07.403989  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:07.404565  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:07.404587  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:07.404598  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:07.404605  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:07.406878  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:07.406894  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:07.406903  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:07.406911  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:07 GMT
	I0830 21:43:07.406918  978470 round_trippers.go:580]     Audit-Id: 19cfe96a-6124-4d98-a7df-815b1aa98b24
	I0830 21:43:07.406937  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:07.406946  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:07.406955  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:07.407247  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:07.901177  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:07.901202  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:07.901210  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:07.901216  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:07.904437  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:07.904457  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:07.904465  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:07.904471  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:07.904484  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:07.904494  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:07.904503  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:07 GMT
	I0830 21:43:07.904513  978470 round_trippers.go:580]     Audit-Id: 33cfee6b-7010-4e4f-824d-58c760763a09
	I0830 21:43:07.904710  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:07.905182  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:07.905198  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:07.905205  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:07.905211  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:07.907189  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:07.907200  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:07.907206  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:07.907211  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:07 GMT
	I0830 21:43:07.907217  978470 round_trippers.go:580]     Audit-Id: f5114e2a-8a44-44da-96c4-1fb8c76c2713
	I0830 21:43:07.907225  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:07.907234  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:07.907243  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:07.907650  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:08.401385  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:08.401412  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:08.401420  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:08.401426  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:08.404308  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:08.404334  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:08.404354  978470 round_trippers.go:580]     Audit-Id: d0f7dc46-0fd6-48a2-b59b-18b90c027795
	I0830 21:43:08.404363  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:08.404373  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:08.404383  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:08.404397  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:08.404408  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:08 GMT
	I0830 21:43:08.405020  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:08.405664  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:08.405684  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:08.405696  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:08.405706  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:08.408085  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:08.408106  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:08.408116  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:08.408125  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:08.408133  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:08 GMT
	I0830 21:43:08.408142  978470 round_trippers.go:580]     Audit-Id: f11b86e5-637c-4841-9f3f-c4acefe022da
	I0830 21:43:08.408154  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:08.408171  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:08.408658  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:08.409063  978470 pod_ready.go:102] pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace has status "Ready":"False"
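The round_trippers.go and request.go entries above are client-go's built-in HTTP tracing; they appear because the minikube client runs with a high klog verbosity, which makes every API round trip print its URL, latency, headers, and a truncated response body. Below is a minimal sketch (not minikube code) of enabling the same tracing in a standalone Go program, assuming a kubeconfig at the default location; the exact verbosity thresholds for URL timing, headers, and bodies vary between client-go versions, roughly -v=6 through -v=10.

package main

import (
	"flag"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog/v2"
)

func main() {
	// Raise klog verbosity so client-go's round_trippers logs each request:
	// method/URL and latency first, then request/response headers, then
	// truncated response bodies at the highest levels (as in the log above).
	klog.InitFlags(nil)
	_ = flag.Set("v", "8")
	_ = flag.Set("logtostderr", "true")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		klog.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		klog.Fatal(err)
	}
	_ = cs // every request made through cs is now traced to stderr
}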
	I0830 21:43:08.900358  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:08.900388  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:08.900401  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:08.900411  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:08.903826  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:08.903853  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:08.903864  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:08.903873  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:08.903883  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:08.903891  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:08.903901  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:08 GMT
	I0830 21:43:08.903909  978470 round_trippers.go:580]     Audit-Id: 77d5ef60-2568-4f44-938b-ad32c8a83d63
	I0830 21:43:08.904220  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:08.904866  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:08.904887  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:08.904899  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:08.904908  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:08.907630  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:08.907650  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:08.907660  978470 round_trippers.go:580]     Audit-Id: 8396ea16-b7b8-43b4-9a91-1dcd08137443
	I0830 21:43:08.907670  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:08.907679  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:08.907698  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:08.907705  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:08.907712  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:08 GMT
	I0830 21:43:08.908051  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:09.400723  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:09.400750  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:09.400758  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:09.400764  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:09.403790  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:09.403816  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:09.403826  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:09.403834  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:09.403843  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:09.403852  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:09.403865  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:09 GMT
	I0830 21:43:09.403875  978470 round_trippers.go:580]     Audit-Id: 8277cc36-415b-454e-8500-6caeb4fa8f85
	I0830 21:43:09.404144  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:09.404807  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:09.404831  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:09.404843  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:09.404854  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:09.407184  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:09.407201  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:09.407210  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:09.407218  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:09.407226  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:09.407239  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:09 GMT
	I0830 21:43:09.407249  978470 round_trippers.go:580]     Audit-Id: 42648166-17a4-49d4-b45b-2ade9dc928f8
	I0830 21:43:09.407262  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:09.407565  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:09.901281  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:09.901311  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:09.901325  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:09.901334  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:09.904362  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:09.904383  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:09.904390  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:09.904396  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:09.904403  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:09.904411  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:09.904422  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:09 GMT
	I0830 21:43:09.904431  978470 round_trippers.go:580]     Audit-Id: af4da0b2-b8ae-474a-a094-42321b857d48
	I0830 21:43:09.904625  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:09.905092  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:09.905106  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:09.905115  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:09.905124  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:09.907194  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:09.907210  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:09.907220  978470 round_trippers.go:580]     Audit-Id: 4ac227bd-90fe-465d-9298-b1d097aef167
	I0830 21:43:09.907240  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:09.907259  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:09.907265  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:09.907271  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:09.907276  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:09 GMT
	I0830 21:43:09.907437  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:10.401038  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:10.401063  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:10.401072  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:10.401078  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:10.404932  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:10.404956  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:10.404965  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:10.404973  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:10.404982  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:10 GMT
	I0830 21:43:10.404990  978470 round_trippers.go:580]     Audit-Id: 7d338171-5690-433e-85fa-99789682826b
	I0830 21:43:10.404999  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:10.405008  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:10.405257  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:10.405769  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:10.405789  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:10.405799  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:10.405814  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:10.416330  978470 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0830 21:43:10.416347  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:10.416357  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:10.416365  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:10.416371  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:10.416379  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:10.416391  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:10 GMT
	I0830 21:43:10.416406  978470 round_trippers.go:580]     Audit-Id: ef811081-d3ed-46c9-9940-84014dc9b17b
	I0830 21:43:10.416759  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:10.417088  978470 pod_ready.go:102] pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace has status "Ready":"False"
	I0830 21:43:10.900983  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:10.901008  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:10.901025  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:10.901034  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:10.910167  978470 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0830 21:43:10.910192  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:10.910200  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:10.910206  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:10.910211  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:10.910217  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:10.910222  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:10 GMT
	I0830 21:43:10.910228  978470 round_trippers.go:580]     Audit-Id: eb734334-2d00-4c77-a8a5-79ce6ff26223
	I0830 21:43:10.910866  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:10.911452  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:10.911471  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:10.911489  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:10.911506  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:10.914308  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:10.914331  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:10.914342  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:10.914350  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:10.914359  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:10.914367  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:10.914376  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:10 GMT
	I0830 21:43:10.914385  978470 round_trippers.go:580]     Audit-Id: 8782a8f4-64a5-4fca-aede-d7ac5d788c5f
	I0830 21:43:10.914596  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:11.401316  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:11.401344  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.401353  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.401359  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.404068  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:11.404090  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.404097  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.404109  978470 round_trippers.go:580]     Audit-Id: 8d943c87-2138-4e46-96ef-0c9eee24e58b
	I0830 21:43:11.404115  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.404120  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.404126  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.404135  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.404350  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"745","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0830 21:43:11.404840  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:11.404857  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.404864  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.404870  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.406823  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:11.406837  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.406843  978470 round_trippers.go:580]     Audit-Id: fbb79f83-b3fb-4d41-a3bc-de085fc9786c
	I0830 21:43:11.406848  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.406854  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.406859  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.406878  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.406891  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.407139  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:11.900814  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:43:11.900839  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.900847  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.900853  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.903561  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:11.903580  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.903588  978470 round_trippers.go:580]     Audit-Id: fa8541d1-4189-4860-9aca-a9cfcb015a41
	I0830 21:43:11.903598  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.903606  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.903613  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.903621  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.903629  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.904039  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"854","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0830 21:43:11.904588  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:11.904607  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.904618  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.904626  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.906877  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:11.906893  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.906899  978470 round_trippers.go:580]     Audit-Id: 8395c794-b14b-46f8-98a3-4e418aa464e3
	I0830 21:43:11.906905  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.906910  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.906916  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.906921  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.906929  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.907087  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:11.907371  978470 pod_ready.go:92] pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace has status "Ready":"True"
	I0830 21:43:11.907385  978470 pod_ready.go:81] duration metric: took 7.519203494s waiting for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
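The pod_ready.go entries show the readiness loop at work: roughly every 500ms the pod (and its node) is fetched and the pod's Ready condition is checked until it reports True or the timeout expires. Below is a minimal sketch of that polling pattern with client-go, assuming an already configured clientset; waitForPodReady and its parameters are illustrative names, not minikube's actual pod_ready.go implementation.

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls a pod until its PodReady condition is True or the
// timeout expires, mirroring the ~500ms poll loop visible in the log above.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Assumption for this sketch: stop on errors instead of retrying.
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Passing immediate=true to wait.PollUntilContextTimeout matches the behaviour in the log, where the first check happens right away and subsequent checks are spaced about half a second apart.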
	I0830 21:43:11.907394  978470 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:11.907444  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-752665
	I0830 21:43:11.907452  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.907462  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.907468  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.909369  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:11.909382  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.909387  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.909392  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.909398  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.909403  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.909410  978470 round_trippers.go:580]     Audit-Id: f0626f5c-c7d5-4255-9d43-cb0fce5c48c8
	I0830 21:43:11.909418  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.909686  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"830","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0830 21:43:11.910060  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:11.910074  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.910081  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.910087  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.911962  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:11.911976  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.911983  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.911993  978470 round_trippers.go:580]     Audit-Id: d717d935-0020-4973-bf90-8c1cb0cb41c3
	I0830 21:43:11.911998  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.912004  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.912013  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.912021  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.912309  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:11.912594  978470 pod_ready.go:92] pod "etcd-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:43:11.912608  978470 pod_ready.go:81] duration metric: took 5.209382ms waiting for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:11.912622  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:11.912678  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-752665
	I0830 21:43:11.912685  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.912692  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.912698  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.915404  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:11.915418  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.915424  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.915429  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.915435  978470 round_trippers.go:580]     Audit-Id: c1b2bbe1-dad0-4297-b857-f7add58e33d2
	I0830 21:43:11.915447  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.915455  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.915465  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.916135  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-752665","namespace":"kube-system","uid":"d813d11d-d0ec-4091-a72b-187bd44eabe3","resourceVersion":"844","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.20:8443","kubernetes.io/config.hash":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.mirror":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.seen":"2023-08-30T21:32:26.214498990Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0830 21:43:11.916499  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:11.916511  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.916517  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.916523  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.918292  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:11.918311  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.918319  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.918325  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.918330  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.918335  978470 round_trippers.go:580]     Audit-Id: 8b85ca6c-6e65-441d-ae11-6013e3a5ef66
	I0830 21:43:11.918340  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.918345  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.918570  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:11.918955  978470 pod_ready.go:92] pod "kube-apiserver-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:43:11.918982  978470 pod_ready.go:81] duration metric: took 6.34762ms waiting for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:11.918994  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:11.919054  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-752665
	I0830 21:43:11.919065  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.919076  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.919085  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.929321  978470 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0830 21:43:11.929340  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.929349  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.929357  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.929365  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.929374  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.929383  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.929390  978470 round_trippers.go:580]     Audit-Id: 19e8b1d5-6fe9-4f86-bcad-ed2a37e98c05
	I0830 21:43:11.929929  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-752665","namespace":"kube-system","uid":"0391b35f-5177-412c-b7d4-073efb2de36b","resourceVersion":"846","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.mirror":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.seen":"2023-08-30T21:32:26.214500244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0830 21:43:11.930342  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:11.930355  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.930362  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.930370  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.934484  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:43:11.934499  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.934505  978470 round_trippers.go:580]     Audit-Id: 1b0c9680-9e97-4fb2-b664-c7c73fb8bb5d
	I0830 21:43:11.934510  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.934516  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.934521  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.934526  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.934531  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.935272  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:11.935634  978470 pod_ready.go:92] pod "kube-controller-manager-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:43:11.935658  978470 pod_ready.go:81] duration metric: took 16.657455ms waiting for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:11.935666  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:11.935718  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:43:11.935726  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.935733  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.935739  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.939684  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:11.939704  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.939710  978470 round_trippers.go:580]     Audit-Id: fc67394a-aaec-41e9-b935-1ba9548e3728
	I0830 21:43:11.939716  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.939721  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.939727  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.939736  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.939746  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.940537  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5twl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff4250a4-1482-42c0-a523-e97faf806c43","resourceVersion":"477","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0830 21:43:11.940902  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:43:11.940915  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:11.940923  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:11.940928  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:11.948806  978470 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0830 21:43:11.948820  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:11.948826  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:11.948832  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:11.948837  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:11.948844  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:11.948850  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:11 GMT
	I0830 21:43:11.948855  978470 round_trippers.go:580]     Audit-Id: d48667a0-94ae-48e4-b3f7-518a5382d32b
	I0830 21:43:11.949489  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0","resourceVersion":"738","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I0830 21:43:11.949728  978470 pod_ready.go:92] pod "kube-proxy-5twl5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:43:11.949743  978470 pod_ready.go:81] duration metric: took 14.072373ms waiting for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:11.949751  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jwftn" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:12.101113  978470 request.go:629] Waited for 151.293824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwftn
	I0830 21:43:12.101197  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwftn
	I0830 21:43:12.101203  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:12.101212  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:12.101219  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:12.104117  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:12.104148  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:12.104159  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:12.104167  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:12.104172  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:12 GMT
	I0830 21:43:12.104179  978470 round_trippers.go:580]     Audit-Id: 4c776b27-5d64-4e8d-a6a8-f9c398405020
	I0830 21:43:12.104191  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:12.104200  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:12.104342  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jwftn","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc888c8-7790-4267-a1fc-cab9448e097b","resourceVersion":"675","creationTimestamp":"2023-08-30T21:34:21Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:34:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0830 21:43:12.301229  978470 request.go:629] Waited for 196.388402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:43:12.301291  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:43:12.301296  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:12.301304  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:12.301310  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:12.306370  978470 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0830 21:43:12.306386  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:12.306395  978470 round_trippers.go:580]     Audit-Id: 6d8e8970-9fac-4b25-860c-dc5b679dd0e8
	I0830 21:43:12.306403  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:12.306412  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:12.306419  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:12.306429  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:12.306441  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:12 GMT
	I0830 21:43:12.306638  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m03","uid":"2c7759fc-7c08-4ea2-b0c4-b56d98a23e6f","resourceVersion":"748","creationTimestamp":"2023-08-30T21:35:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:35:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3412 chars]
	I0830 21:43:12.306934  978470 pod_ready.go:92] pod "kube-proxy-jwftn" in "kube-system" namespace has status "Ready":"True"
	I0830 21:43:12.306954  978470 pod_ready.go:81] duration metric: took 357.197264ms waiting for pod "kube-proxy-jwftn" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:12.306968  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:12.501479  978470 request.go:629] Waited for 194.422982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:43:12.501559  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:43:12.501569  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:12.501578  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:12.501587  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:12.504276  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:12.504297  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:12.504308  978470 round_trippers.go:580]     Audit-Id: a2c22548-4ac4-4586-b2f5-8a88f378ef34
	I0830 21:43:12.504315  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:12.504323  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:12.504330  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:12.504339  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:12.504350  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:12 GMT
	I0830 21:43:12.504583  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vltx5","generateName":"kube-proxy-","namespace":"kube-system","uid":"24ee271e-5778-4d0c-ab2c-77426f2673b3","resourceVersion":"752","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0830 21:43:12.701421  978470 request.go:629] Waited for 196.368788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:12.701482  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:12.701486  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:12.701494  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:12.701501  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:12.704037  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:12.704057  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:12.704068  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:12.704077  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:12.704084  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:12.704092  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:12.704100  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:12 GMT
	I0830 21:43:12.704112  978470 round_trippers.go:580]     Audit-Id: 922a2ec8-9c84-4591-819d-9de076fc1b40
	I0830 21:43:12.704479  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:12.704801  978470 pod_ready.go:92] pod "kube-proxy-vltx5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:43:12.704814  978470 pod_ready.go:81] duration metric: took 397.839388ms waiting for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:12.704826  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:12.901136  978470 request.go:629] Waited for 196.226159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:43:12.901227  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:43:12.901234  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:12.901250  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:12.901265  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:12.904035  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:12.904064  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:12.904076  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:12 GMT
	I0830 21:43:12.904084  978470 round_trippers.go:580]     Audit-Id: d40f5bbe-4d90-43d2-8cf0-8d21ee80e7f7
	I0830 21:43:12.904091  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:12.904100  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:12.904122  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:12.904134  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:12.904290  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-752665","namespace":"kube-system","uid":"4c8a6a98-51b6-4010-9519-add75ab1a7a9","resourceVersion":"842","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.mirror":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.seen":"2023-08-30T21:32:35.235897289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0830 21:43:13.101457  978470 request.go:629] Waited for 196.677941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:13.101538  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:43:13.101545  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:13.101564  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:13.101578  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:13.104389  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:13.104409  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:13.104416  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:13.104422  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:13.104427  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:13.104433  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:13 GMT
	I0830 21:43:13.104438  978470 round_trippers.go:580]     Audit-Id: 67234ffc-43ee-4b0c-9b0b-39370693f5d0
	I0830 21:43:13.104444  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:13.104762  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0830 21:43:13.105088  978470 pod_ready.go:92] pod "kube-scheduler-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:43:13.105101  978470 pod_ready.go:81] duration metric: took 400.26635ms waiting for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:43:13.105111  978470 pod_ready.go:38] duration metric: took 8.725576564s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
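The block above is the per-pod readiness poll: for each kube-proxy pod the log shows a GET of the pod object, a check of its Ready condition, then a GET of the owning node before moving to the next pod. A minimal client-go sketch of the same per-pod check follows; it is only an outline, not minikube's pod_ready.go, and the kubeconfig path is a placeholder (the pod name is taken from the log above).

	// Sketch: check whether a named kube-system pod reports Ready=True,
	// mirroring the pod_ready.go polling seen in the log above.
	// The kubeconfig path is a placeholder; the pod name comes from the log.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-5twl5", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("pod %q Ready=%v\n", pod.Name, podIsReady(pod))
	}
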
	I0830 21:43:13.105130  978470 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:43:13.105214  978470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:43:13.118449  978470 command_runner.go:130] > 1104
	I0830 21:43:13.118489  978470 api_server.go:72] duration metric: took 14.619392388s to wait for apiserver process to appear ...
	I0830 21:43:13.118502  978470 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:43:13.118526  978470 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:43:13.124774  978470 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I0830 21:43:13.124863  978470 round_trippers.go:463] GET https://192.168.39.20:8443/version
	I0830 21:43:13.124874  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:13.124887  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:13.124899  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:13.126146  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:43:13.126161  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:13.126168  978470 round_trippers.go:580]     Audit-Id: 7622fdaf-42ae-4040-8dd6-f00ca94a0b71
	I0830 21:43:13.126182  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:13.126194  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:13.126208  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:13.126219  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:13.126232  978470 round_trippers.go:580]     Content-Length: 263
	I0830 21:43:13.126241  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:13 GMT
	I0830 21:43:13.126289  978470 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0830 21:43:13.126355  978470 api_server.go:141] control plane version: v1.28.1
	I0830 21:43:13.126373  978470 api_server.go:131] duration metric: took 7.864717ms to wait for apiserver health ...
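The two requests above are the apiserver health probe: GET /healthz must return the literal body "ok", then GET /version is decoded for the control-plane version. A minimal sketch of that flow in plain net/http follows; the host is the one from the log, and the *http.Client is assumed to already trust the cluster CA and carry credentials (not shown here), otherwise the TLS handshake fails.

	// Sketch of the apiserver healthz + version probe shown in the log.
	// The http.Client must be configured for the cluster's TLS and auth;
	// http.DefaultClient is used below only to keep the outline short.
	package main

	import (
		"encoding/json"
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func checkAPIServer(client *http.Client, host string) (string, error) {
		resp, err := client.Get(host + "/healthz")
		if err != nil {
			return "", err
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return "", fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
		}

		resp, err = client.Get(host + "/version")
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			return "", err
		}
		return v.GitVersion, nil
	}

	func main() {
		ver, err := checkAPIServer(http.DefaultClient, "https://192.168.39.20:8443")
		if err != nil {
			log.Fatal(err) // expected unless the client is configured for the cluster's TLS/auth
		}
		fmt.Println("control plane version:", ver)
	}
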
	I0830 21:43:13.126382  978470 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:43:13.301900  978470 request.go:629] Waited for 175.400407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:43:13.301972  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:43:13.301977  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:13.301985  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:13.301992  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:13.306674  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:43:13.306704  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:13.306715  978470 round_trippers.go:580]     Audit-Id: 69f90620-717f-44e7-bce8-2b1701b4f95c
	I0830 21:43:13.306724  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:13.306732  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:13.306744  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:13.306753  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:13.306762  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:13 GMT
	I0830 21:43:13.308682  978470 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"864"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"854","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81468 chars]
	I0830 21:43:13.311172  978470 system_pods.go:59] 12 kube-system pods found
	I0830 21:43:13.311196  978470 system_pods.go:61] "coredns-5dd5756b68-zcppg" [4742270b-6c64-411b-bfb6-8c53211aa106] Running
	I0830 21:43:13.311201  978470 system_pods.go:61] "etcd-multinode-752665" [25e2609d-f391-4e71-823a-c4fe8625092d] Running
	I0830 21:43:13.311205  978470 system_pods.go:61] "kindnet-4q5fx" [864ea4a7-8b4f-4690-90a3-a4c50a909f44] Running
	I0830 21:43:13.311214  978470 system_pods.go:61] "kindnet-d4xrz" [db9dcca6-eedf-4c5f-b3e8-785a4689b7ea] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0830 21:43:13.311226  978470 system_pods.go:61] "kindnet-x5kk4" [2fdd77f6-856a-4400-b881-210549c588e2] Running
	I0830 21:43:13.311233  978470 system_pods.go:61] "kube-apiserver-multinode-752665" [d813d11d-d0ec-4091-a72b-187bd44eabe3] Running
	I0830 21:43:13.311244  978470 system_pods.go:61] "kube-controller-manager-multinode-752665" [0391b35f-5177-412c-b7d4-073efb2de36b] Running
	I0830 21:43:13.311256  978470 system_pods.go:61] "kube-proxy-5twl5" [ff4250a4-1482-42c0-a523-e97faf806c43] Running
	I0830 21:43:13.311263  978470 system_pods.go:61] "kube-proxy-jwftn" [bfc888c8-7790-4267-a1fc-cab9448e097b] Running
	I0830 21:43:13.311267  978470 system_pods.go:61] "kube-proxy-vltx5" [24ee271e-5778-4d0c-ab2c-77426f2673b3] Running
	I0830 21:43:13.311273  978470 system_pods.go:61] "kube-scheduler-multinode-752665" [4c8a6a98-51b6-4010-9519-add75ab1a7a9] Running
	I0830 21:43:13.311277  978470 system_pods.go:61] "storage-provisioner" [67db5a8a-290a-40a7-b42e-212d99db812a] Running
	I0830 21:43:13.311285  978470 system_pods.go:74] duration metric: took 184.894538ms to wait for pod list to return data ...
	I0830 21:43:13.311296  978470 default_sa.go:34] waiting for default service account to be created ...
	I0830 21:43:13.501770  978470 request.go:629] Waited for 190.389095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/default/serviceaccounts
	I0830 21:43:13.501850  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/default/serviceaccounts
	I0830 21:43:13.501855  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:13.501863  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:13.501869  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:13.504765  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:43:13.504787  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:13.504795  978470 round_trippers.go:580]     Content-Length: 261
	I0830 21:43:13.504806  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:13 GMT
	I0830 21:43:13.504819  978470 round_trippers.go:580]     Audit-Id: bb4c9d41-8398-430e-b9ea-7fa6432d2739
	I0830 21:43:13.504831  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:13.504856  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:13.504864  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:13.504871  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:13.504902  978470 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"864"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"59f61465-a2d3-4fe6-934b-a1516977e952","resourceVersion":"315","creationTimestamp":"2023-08-30T21:32:47Z"}}]}
	I0830 21:43:13.505120  978470 default_sa.go:45] found service account: "default"
	I0830 21:43:13.505141  978470 default_sa.go:55] duration metric: took 193.837956ms for default service account to be created ...
	I0830 21:43:13.505152  978470 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 21:43:13.701717  978470 request.go:629] Waited for 196.443485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:43:13.701806  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:43:13.701813  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:13.701828  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:13.701838  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:13.705782  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:13.705803  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:13.705811  978470 round_trippers.go:580]     Audit-Id: 4d0b56e9-f0b4-4112-9670-9451f6a73f40
	I0830 21:43:13.705817  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:13.705824  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:13.705832  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:13.705840  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:13.705847  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:13 GMT
	I0830 21:43:13.707750  978470 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"864"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"854","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81468 chars]
	I0830 21:43:13.710233  978470 system_pods.go:86] 12 kube-system pods found
	I0830 21:43:13.710256  978470 system_pods.go:89] "coredns-5dd5756b68-zcppg" [4742270b-6c64-411b-bfb6-8c53211aa106] Running
	I0830 21:43:13.710261  978470 system_pods.go:89] "etcd-multinode-752665" [25e2609d-f391-4e71-823a-c4fe8625092d] Running
	I0830 21:43:13.710265  978470 system_pods.go:89] "kindnet-4q5fx" [864ea4a7-8b4f-4690-90a3-a4c50a909f44] Running
	I0830 21:43:13.710273  978470 system_pods.go:89] "kindnet-d4xrz" [db9dcca6-eedf-4c5f-b3e8-785a4689b7ea] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0830 21:43:13.710279  978470 system_pods.go:89] "kindnet-x5kk4" [2fdd77f6-856a-4400-b881-210549c588e2] Running
	I0830 21:43:13.710284  978470 system_pods.go:89] "kube-apiserver-multinode-752665" [d813d11d-d0ec-4091-a72b-187bd44eabe3] Running
	I0830 21:43:13.710289  978470 system_pods.go:89] "kube-controller-manager-multinode-752665" [0391b35f-5177-412c-b7d4-073efb2de36b] Running
	I0830 21:43:13.710295  978470 system_pods.go:89] "kube-proxy-5twl5" [ff4250a4-1482-42c0-a523-e97faf806c43] Running
	I0830 21:43:13.710299  978470 system_pods.go:89] "kube-proxy-jwftn" [bfc888c8-7790-4267-a1fc-cab9448e097b] Running
	I0830 21:43:13.710303  978470 system_pods.go:89] "kube-proxy-vltx5" [24ee271e-5778-4d0c-ab2c-77426f2673b3] Running
	I0830 21:43:13.710306  978470 system_pods.go:89] "kube-scheduler-multinode-752665" [4c8a6a98-51b6-4010-9519-add75ab1a7a9] Running
	I0830 21:43:13.710310  978470 system_pods.go:89] "storage-provisioner" [67db5a8a-290a-40a7-b42e-212d99db812a] Running
	I0830 21:43:13.710316  978470 system_pods.go:126] duration metric: took 205.155619ms to wait for k8s-apps to be running ...
	I0830 21:43:13.710322  978470 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:43:13.710368  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:43:13.723351  978470 system_svc.go:56] duration metric: took 13.020062ms WaitForService to wait for kubelet.
	I0830 21:43:13.723379  978470 kubeadm.go:581] duration metric: took 15.224281597s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:43:13.723402  978470 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:43:13.901831  978470 request.go:629] Waited for 178.341206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes
	I0830 21:43:13.901909  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes
	I0830 21:43:13.901916  978470 round_trippers.go:469] Request Headers:
	I0830 21:43:13.901924  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:43:13.901933  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:43:13.904972  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:43:13.905000  978470 round_trippers.go:577] Response Headers:
	I0830 21:43:13.905010  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:43:13.905018  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:43:13 GMT
	I0830 21:43:13.905032  978470 round_trippers.go:580]     Audit-Id: cc74022c-0c65-4a88-8de4-1c92497a3730
	I0830 21:43:13.905043  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:43:13.905050  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:43:13.905061  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:43:13.905302  978470 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"864"},"items":[{"metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"825","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15073 chars]
	I0830 21:43:13.905919  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:43:13.905941  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:43:13.905954  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:43:13.905961  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:43:13.905967  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:43:13.905975  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:43:13.905981  978470 node_conditions.go:105] duration metric: took 182.571582ms to run NodePressure ...
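The NodePressure step above lists the nodes once and records each node's ephemeral-storage and CPU capacity. A short client-go sketch of reading the same fields follows; it is only an outline of the check the log implies, and the kubeconfig path is a placeholder.

	// Sketch: list nodes and print the capacity fields the log reports
	// (ephemeral-storage and cpu). Kubeconfig path is a placeholder.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		}
	}
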
	I0830 21:43:13.906004  978470 start.go:228] waiting for startup goroutines ...
	I0830 21:43:13.906014  978470 start.go:233] waiting for cluster config update ...
	I0830 21:43:13.906028  978470 start.go:242] writing updated cluster config ...
	I0830 21:43:13.906521  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:43:13.906638  978470 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:43:13.909192  978470 out.go:177] * Starting worker node multinode-752665-m02 in cluster multinode-752665
	I0830 21:43:13.910556  978470 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:43:13.910579  978470 cache.go:57] Caching tarball of preloaded images
	I0830 21:43:13.910656  978470 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 21:43:13.910666  978470 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 21:43:13.910760  978470 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:43:13.910916  978470 start.go:365] acquiring machines lock for multinode-752665-m02: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:43:13.910956  978470 start.go:369] acquired machines lock for "multinode-752665-m02" in 22.675µs
	I0830 21:43:13.910970  978470 start.go:96] Skipping create...Using existing machine configuration
	I0830 21:43:13.910978  978470 fix.go:54] fixHost starting: m02
	I0830 21:43:13.911238  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:43:13.911261  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:43:13.925848  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42605
	I0830 21:43:13.926305  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:43:13.926836  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:43:13.926856  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:43:13.927259  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:43:13.927474  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:43:13.927652  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetState
	I0830 21:43:13.929152  978470 fix.go:102] recreateIfNeeded on multinode-752665-m02: state=Running err=<nil>
	W0830 21:43:13.929168  978470 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 21:43:13.931165  978470 out.go:177] * Updating the running kvm2 "multinode-752665-m02" VM ...
	I0830 21:43:13.933078  978470 machine.go:88] provisioning docker machine ...
	I0830 21:43:13.933103  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:43:13.933330  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetMachineName
	I0830 21:43:13.933491  978470 buildroot.go:166] provisioning hostname "multinode-752665-m02"
	I0830 21:43:13.933513  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetMachineName
	I0830 21:43:13.933627  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:43:13.936110  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:13.936492  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:43:13.936525  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:13.936658  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:43:13.936867  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:43:13.937014  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:43:13.937163  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:43:13.937337  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:43:13.937772  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:43:13.937788  978470 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-752665-m02 && echo "multinode-752665-m02" | sudo tee /etc/hostname
	I0830 21:43:14.074795  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-752665-m02
	
	I0830 21:43:14.074830  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:43:14.077977  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.078422  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:43:14.078459  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.078615  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:43:14.078812  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:43:14.078942  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:43:14.079048  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:43:14.079181  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:43:14.079594  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:43:14.079613  978470 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-752665-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-752665-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-752665-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:43:14.196521  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:43:14.196552  978470 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 21:43:14.196578  978470 buildroot.go:174] setting up certificates
	I0830 21:43:14.196594  978470 provision.go:83] configureAuth start
	I0830 21:43:14.196604  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetMachineName
	I0830 21:43:14.196905  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetIP
	I0830 21:43:14.199747  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.200176  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:43:14.200220  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.200351  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:43:14.202569  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.202953  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:43:14.202984  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.203118  978470 provision.go:138] copyHostCerts
	I0830 21:43:14.203150  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:43:14.203183  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 21:43:14.203194  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:43:14.203257  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 21:43:14.203327  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:43:14.203344  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 21:43:14.203350  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:43:14.203373  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 21:43:14.203411  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:43:14.203426  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 21:43:14.203432  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:43:14.203452  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 21:43:14.203493  978470 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.multinode-752665-m02 san=[192.168.39.46 192.168.39.46 localhost 127.0.0.1 minikube multinode-752665-m02]
	I0830 21:43:14.330720  978470 provision.go:172] copyRemoteCerts
	I0830 21:43:14.330782  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:43:14.330810  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:43:14.333655  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.334011  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:43:14.334044  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.334238  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:43:14.334438  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:43:14.334588  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:43:14.334731  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:43:14.425245  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 21:43:14.425318  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 21:43:14.451651  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 21:43:14.451724  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0830 21:43:14.475074  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 21:43:14.475135  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 21:43:14.497604  978470 provision.go:86] duration metric: configureAuth took 300.997004ms
	I0830 21:43:14.497630  978470 buildroot.go:189] setting minikube options for container-runtime
	I0830 21:43:14.497852  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:43:14.497929  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:43:14.500443  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.500960  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:43:14.500993  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:43:14.501201  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:43:14.501421  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:43:14.501617  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:43:14.501759  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:43:14.501907  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:43:14.502308  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:43:14.502324  978470 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:44:45.087676  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:44:45.087737  978470 machine.go:91] provisioned docker machine in 1m31.154629576s
	I0830 21:44:45.087756  978470 start.go:300] post-start starting for "multinode-752665-m02" (driver="kvm2")
	I0830 21:44:45.087819  978470 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:44:45.087855  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:44:45.088186  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:44:45.088232  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:44:45.091127  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.091508  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:44:45.091538  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.091730  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:44:45.091925  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:44:45.092098  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:44:45.092281  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:44:45.181241  978470 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:44:45.185376  978470 command_runner.go:130] > NAME=Buildroot
	I0830 21:44:45.185400  978470 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0830 21:44:45.185407  978470 command_runner.go:130] > ID=buildroot
	I0830 21:44:45.185423  978470 command_runner.go:130] > VERSION_ID=2021.02.12
	I0830 21:44:45.185433  978470 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0830 21:44:45.185505  978470 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 21:44:45.185538  978470 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 21:44:45.185617  978470 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 21:44:45.185723  978470 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 21:44:45.185739  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /etc/ssl/certs/9626212.pem
	I0830 21:44:45.185850  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 21:44:45.193680  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:44:45.217261  978470 start.go:303] post-start completed in 129.486902ms
	I0830 21:44:45.217288  978470 fix.go:56] fixHost completed within 1m31.306309024s
	I0830 21:44:45.217320  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:44:45.220089  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.220392  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:44:45.220420  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.220584  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:44:45.220831  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:44:45.221019  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:44:45.221180  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:44:45.221347  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:44:45.221813  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0830 21:44:45.221828  978470 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 21:44:45.340559  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693431885.332228726
	
	I0830 21:44:45.340587  978470 fix.go:206] guest clock: 1693431885.332228726
	I0830 21:44:45.340597  978470 fix.go:219] Guest: 2023-08-30 21:44:45.332228726 +0000 UTC Remote: 2023-08-30 21:44:45.217293281 +0000 UTC m=+452.418904854 (delta=114.935445ms)
	I0830 21:44:45.340656  978470 fix.go:190] guest clock delta is within tolerance: 114.935445ms
	I0830 21:44:45.340664  978470 start.go:83] releasing machines lock for "multinode-752665-m02", held for 1m31.429697778s
	I0830 21:44:45.340701  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:44:45.341005  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetIP
	I0830 21:44:45.343628  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.344133  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:44:45.344166  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.346401  978470 out.go:177] * Found network options:
	I0830 21:44:45.348233  978470 out.go:177]   - NO_PROXY=192.168.39.20
	W0830 21:44:45.349697  978470 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 21:44:45.349727  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:44:45.350317  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:44:45.350501  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:44:45.350584  978470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:44:45.350631  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	W0830 21:44:45.350727  978470 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 21:44:45.350806  978470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:44:45.350825  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:44:45.353317  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.353544  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.353723  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:44:45.353753  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.353866  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:44:45.354002  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:44:45.354038  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:45.354039  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:44:45.354177  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:44:45.354180  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:44:45.354386  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:44:45.354399  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:44:45.354571  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:44:45.354702  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:44:45.466439  978470 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 21:44:45.585232  978470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 21:44:45.591414  978470 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0830 21:44:45.591446  978470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 21:44:45.591499  978470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:44:45.600761  978470 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0830 21:44:45.600791  978470 start.go:466] detecting cgroup driver to use...
	I0830 21:44:45.600870  978470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:44:45.615477  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:44:45.628104  978470 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:44:45.628168  978470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:44:45.642537  978470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:44:45.656271  978470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:44:45.798694  978470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:44:45.945801  978470 docker.go:212] disabling docker service ...
	I0830 21:44:45.945879  978470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:44:45.962523  978470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:44:45.976847  978470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:44:46.124670  978470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:44:46.269102  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:44:46.283948  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:44:46.303696  978470 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
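The tee step above writes a one-line /etc/crictl.yaml pointing crictl at the CRI-O socket, and the echoed runtime-endpoint confirms it. A quick way to verify the wiring by hand (a sketch; the path and endpoint are the ones shown in the log):

	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl info >/dev/null && echo "crictl reaches CRI-O"   # crictl reads the endpoint from /etc/crictl.yaml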
	I0830 21:44:46.304410  978470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 21:44:46.304472  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:44:46.316540  978470 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:44:46.316613  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:44:46.326692  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:44:46.336964  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
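The three sed edits above rewrite the 02-crio.conf drop-in so CRI-O uses registry.k8s.io/pause:3.9 as its pause image, cgroupfs as the cgroup manager, and a per-pod conmon cgroup. A quick sanity check of the resulting drop-in (a sketch; the values are the ones set in the log):

	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"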
	I0830 21:44:46.347285  978470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:44:46.358086  978470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:44:46.367501  978470 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0830 21:44:46.367583  978470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:44:46.376048  978470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:44:46.507680  978470 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 21:44:46.754760  978470 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:44:46.754838  978470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:44:46.760320  978470 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0830 21:44:46.760351  978470 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 21:44:46.760368  978470 command_runner.go:130] > Device: 16h/22d	Inode: 1211        Links: 1
	I0830 21:44:46.760378  978470 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:44:46.760386  978470 command_runner.go:130] > Access: 2023-08-30 21:44:46.674362438 +0000
	I0830 21:44:46.760396  978470 command_runner.go:130] > Modify: 2023-08-30 21:44:46.674362438 +0000
	I0830 21:44:46.760404  978470 command_runner.go:130] > Change: 2023-08-30 21:44:46.674362438 +0000
	I0830 21:44:46.760409  978470 command_runner.go:130] >  Birth: -
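After restarting crio, start.go waits up to 60s for /var/run/crio/crio.sock to appear; the stat output above shows the socket already existed. A rough shell equivalent of that wait (a sketch, not minikube's actual retry helper):

	# Poll for the CRI-O socket for up to 60 seconds, matching the "Will wait 60s" step above.
	for _ in $(seq 1 60); do
	  [ -S /var/run/crio/crio.sock ] && break
	  sleep 1
	done
	stat /var/run/crio/crio.sock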
	I0830 21:44:46.760436  978470 start.go:534] Will wait 60s for crictl version
	I0830 21:44:46.760492  978470 ssh_runner.go:195] Run: which crictl
	I0830 21:44:46.764711  978470 command_runner.go:130] > /usr/bin/crictl
	I0830 21:44:46.764781  978470 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:44:46.805860  978470 command_runner.go:130] > Version:  0.1.0
	I0830 21:44:46.805885  978470 command_runner.go:130] > RuntimeName:  cri-o
	I0830 21:44:46.805907  978470 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0830 21:44:46.805917  978470 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0830 21:44:46.807197  978470 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 21:44:46.807271  978470 ssh_runner.go:195] Run: crio --version
	I0830 21:44:46.853844  978470 command_runner.go:130] > crio version 1.24.1
	I0830 21:44:46.853868  978470 command_runner.go:130] > Version:          1.24.1
	I0830 21:44:46.853875  978470 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:44:46.853879  978470 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:44:46.853885  978470 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:44:46.853890  978470 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:44:46.853894  978470 command_runner.go:130] > Compiler:         gc
	I0830 21:44:46.853898  978470 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:44:46.853911  978470 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:44:46.853918  978470 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:44:46.853922  978470 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:44:46.853928  978470 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:44:46.855497  978470 ssh_runner.go:195] Run: crio --version
	I0830 21:44:46.901946  978470 command_runner.go:130] > crio version 1.24.1
	I0830 21:44:46.901975  978470 command_runner.go:130] > Version:          1.24.1
	I0830 21:44:46.901986  978470 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:44:46.901992  978470 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:44:46.902000  978470 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:44:46.902008  978470 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:44:46.902015  978470 command_runner.go:130] > Compiler:         gc
	I0830 21:44:46.902022  978470 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:44:46.902029  978470 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:44:46.902041  978470 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:44:46.902047  978470 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:44:46.902054  978470 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:44:46.904144  978470 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 21:44:46.905586  978470 out.go:177]   - env NO_PROXY=192.168.39.20
	I0830 21:44:46.907079  978470 main.go:141] libmachine: (multinode-752665-m02) Calling .GetIP
	I0830 21:44:46.909873  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:46.910243  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:44:46.910280  978470 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:44:46.910452  978470 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 21:44:46.914976  978470 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0830 21:44:46.915134  978470 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665 for IP: 192.168.39.46
	I0830 21:44:46.915157  978470 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:44:46.915313  978470 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 21:44:46.915362  978470 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 21:44:46.915376  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 21:44:46.915393  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 21:44:46.915411  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 21:44:46.915432  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 21:44:46.915500  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 21:44:46.915545  978470 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 21:44:46.915558  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 21:44:46.915592  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 21:44:46.915630  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:44:46.915658  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 21:44:46.915718  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:44:46.915752  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:44:46.915788  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem -> /usr/share/ca-certificates/962621.pem
	I0830 21:44:46.915805  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /usr/share/ca-certificates/9626212.pem
	I0830 21:44:46.916310  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:44:46.941381  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:44:46.965150  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:44:46.988819  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 21:44:47.011679  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:44:47.036161  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 21:44:47.060318  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 21:44:47.083339  978470 ssh_runner.go:195] Run: openssl version
	I0830 21:44:47.088759  978470 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0830 21:44:47.089237  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 21:44:47.099170  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 21:44:47.104529  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:44:47.104565  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:44:47.104618  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 21:44:47.110358  978470 command_runner.go:130] > 3ec20f2e
	I0830 21:44:47.110555  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 21:44:47.118728  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:44:47.128175  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:44:47.132572  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:44:47.132738  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:44:47.132792  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:44:47.137941  978470 command_runner.go:130] > b5213941
	I0830 21:44:47.138157  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:44:47.146118  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 21:44:47.155687  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 21:44:47.160370  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:44:47.160398  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:44:47.160444  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 21:44:47.166023  978470 command_runner.go:130] > 51391683
	I0830 21:44:47.166095  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
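The openssl/ln sequence repeated above installs each CA into the node's trust store using OpenSSL's subject-hash naming: the PEM sits in /usr/share/ca-certificates and a <hash>.0 symlink in /etc/ssl/certs points at it. A condensed sketch of one iteration (paths and hash taken from the minikubeCA lines in the log):

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")    # the log shows b5213941 for this cert
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # OpenSSL resolves trusted CAs by <subject-hash>.0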
	I0830 21:44:47.174363  978470 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:44:47.178395  978470 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:44:47.178424  978470 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:44:47.178503  978470 ssh_runner.go:195] Run: crio config
	I0830 21:44:47.230559  978470 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0830 21:44:47.230607  978470 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0830 21:44:47.230616  978470 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0830 21:44:47.230620  978470 command_runner.go:130] > #
	I0830 21:44:47.230630  978470 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0830 21:44:47.230639  978470 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0830 21:44:47.230649  978470 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0830 21:44:47.230661  978470 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0830 21:44:47.230676  978470 command_runner.go:130] > # reload'.
	I0830 21:44:47.230685  978470 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0830 21:44:47.230695  978470 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0830 21:44:47.230705  978470 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0830 21:44:47.230720  978470 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0830 21:44:47.230726  978470 command_runner.go:130] > [crio]
	I0830 21:44:47.230736  978470 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0830 21:44:47.230744  978470 command_runner.go:130] > # containers images, in this directory.
	I0830 21:44:47.230781  978470 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0830 21:44:47.230804  978470 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0830 21:44:47.231239  978470 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0830 21:44:47.231253  978470 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0830 21:44:47.231259  978470 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0830 21:44:47.231466  978470 command_runner.go:130] > storage_driver = "overlay"
	I0830 21:44:47.231477  978470 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0830 21:44:47.231483  978470 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0830 21:44:47.231488  978470 command_runner.go:130] > storage_option = [
	I0830 21:44:47.231883  978470 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0830 21:44:47.231947  978470 command_runner.go:130] > ]
	I0830 21:44:47.231963  978470 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0830 21:44:47.231977  978470 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0830 21:44:47.232599  978470 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0830 21:44:47.232611  978470 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0830 21:44:47.232618  978470 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0830 21:44:47.232623  978470 command_runner.go:130] > # always happen on a node reboot
	I0830 21:44:47.233295  978470 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0830 21:44:47.233309  978470 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0830 21:44:47.233315  978470 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0830 21:44:47.233327  978470 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0830 21:44:47.234049  978470 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0830 21:44:47.234070  978470 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0830 21:44:47.234084  978470 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0830 21:44:47.234104  978470 command_runner.go:130] > # internal_wipe = true
	I0830 21:44:47.234117  978470 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0830 21:44:47.234128  978470 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0830 21:44:47.234140  978470 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0830 21:44:47.234149  978470 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0830 21:44:47.234168  978470 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0830 21:44:47.234176  978470 command_runner.go:130] > [crio.api]
	I0830 21:44:47.234182  978470 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0830 21:44:47.234201  978470 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0830 21:44:47.234213  978470 command_runner.go:130] > # IP address on which the stream server will listen.
	I0830 21:44:47.234221  978470 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0830 21:44:47.234236  978470 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0830 21:44:47.234248  978470 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0830 21:44:47.234256  978470 command_runner.go:130] > # stream_port = "0"
	I0830 21:44:47.234264  978470 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0830 21:44:47.234286  978470 command_runner.go:130] > # stream_enable_tls = false
	I0830 21:44:47.234298  978470 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0830 21:44:47.234305  978470 command_runner.go:130] > # stream_idle_timeout = ""
	I0830 21:44:47.234320  978470 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0830 21:44:47.234335  978470 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0830 21:44:47.234344  978470 command_runner.go:130] > # minutes.
	I0830 21:44:47.234353  978470 command_runner.go:130] > # stream_tls_cert = ""
	I0830 21:44:47.234365  978470 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0830 21:44:47.234373  978470 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0830 21:44:47.234379  978470 command_runner.go:130] > # stream_tls_key = ""
	I0830 21:44:47.234388  978470 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0830 21:44:47.234403  978470 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0830 21:44:47.234415  978470 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0830 21:44:47.234425  978470 command_runner.go:130] > # stream_tls_ca = ""
	I0830 21:44:47.234441  978470 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:44:47.234452  978470 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0830 21:44:47.234463  978470 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:44:47.234474  978470 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0830 21:44:47.234493  978470 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0830 21:44:47.234504  978470 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0830 21:44:47.234513  978470 command_runner.go:130] > [crio.runtime]
	I0830 21:44:47.234522  978470 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0830 21:44:47.234534  978470 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0830 21:44:47.234544  978470 command_runner.go:130] > # "nofile=1024:2048"
	I0830 21:44:47.234555  978470 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0830 21:44:47.234564  978470 command_runner.go:130] > # default_ulimits = [
	I0830 21:44:47.234570  978470 command_runner.go:130] > # ]
	I0830 21:44:47.234579  978470 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0830 21:44:47.234588  978470 command_runner.go:130] > # no_pivot = false
	I0830 21:44:47.234597  978470 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0830 21:44:47.234610  978470 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0830 21:44:47.234621  978470 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0830 21:44:47.234629  978470 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0830 21:44:47.234638  978470 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0830 21:44:47.234649  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:44:47.234660  978470 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0830 21:44:47.234668  978470 command_runner.go:130] > # Cgroup setting for conmon
	I0830 21:44:47.234679  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0830 21:44:47.234689  978470 command_runner.go:130] > conmon_cgroup = "pod"
	I0830 21:44:47.234703  978470 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0830 21:44:47.234715  978470 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0830 21:44:47.234729  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:44:47.234738  978470 command_runner.go:130] > conmon_env = [
	I0830 21:44:47.234746  978470 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0830 21:44:47.234752  978470 command_runner.go:130] > ]
	I0830 21:44:47.234757  978470 command_runner.go:130] > # Additional environment variables to set for all the
	I0830 21:44:47.234763  978470 command_runner.go:130] > # containers. These are overridden if set in the
	I0830 21:44:47.234769  978470 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0830 21:44:47.234780  978470 command_runner.go:130] > # default_env = [
	I0830 21:44:47.234785  978470 command_runner.go:130] > # ]
	I0830 21:44:47.234794  978470 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0830 21:44:47.234799  978470 command_runner.go:130] > # selinux = false
	I0830 21:44:47.234809  978470 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0830 21:44:47.234821  978470 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0830 21:44:47.234831  978470 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0830 21:44:47.234840  978470 command_runner.go:130] > # seccomp_profile = ""
	I0830 21:44:47.234849  978470 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0830 21:44:47.234866  978470 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0830 21:44:47.234899  978470 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0830 21:44:47.234908  978470 command_runner.go:130] > # which might increase security.
	I0830 21:44:47.234915  978470 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0830 21:44:47.234929  978470 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0830 21:44:47.234940  978470 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0830 21:44:47.234954  978470 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0830 21:44:47.234965  978470 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0830 21:44:47.234975  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:44:47.234982  978470 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0830 21:44:47.234992  978470 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0830 21:44:47.235001  978470 command_runner.go:130] > # the cgroup blockio controller.
	I0830 21:44:47.235008  978470 command_runner.go:130] > # blockio_config_file = ""
	I0830 21:44:47.235021  978470 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0830 21:44:47.235031  978470 command_runner.go:130] > # irqbalance daemon.
	I0830 21:44:47.235039  978470 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0830 21:44:47.235053  978470 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0830 21:44:47.235065  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:44:47.235077  978470 command_runner.go:130] > # rdt_config_file = ""
	I0830 21:44:47.235089  978470 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0830 21:44:47.235098  978470 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0830 21:44:47.235106  978470 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0830 21:44:47.235113  978470 command_runner.go:130] > # separate_pull_cgroup = ""
	I0830 21:44:47.235124  978470 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0830 21:44:47.235137  978470 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0830 21:44:47.235143  978470 command_runner.go:130] > # will be added.
	I0830 21:44:47.235158  978470 command_runner.go:130] > # default_capabilities = [
	I0830 21:44:47.235167  978470 command_runner.go:130] > # 	"CHOWN",
	I0830 21:44:47.235174  978470 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0830 21:44:47.235183  978470 command_runner.go:130] > # 	"FSETID",
	I0830 21:44:47.235189  978470 command_runner.go:130] > # 	"FOWNER",
	I0830 21:44:47.235197  978470 command_runner.go:130] > # 	"SETGID",
	I0830 21:44:47.235204  978470 command_runner.go:130] > # 	"SETUID",
	I0830 21:44:47.235212  978470 command_runner.go:130] > # 	"SETPCAP",
	I0830 21:44:47.235219  978470 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0830 21:44:47.235227  978470 command_runner.go:130] > # 	"KILL",
	I0830 21:44:47.235233  978470 command_runner.go:130] > # ]
	I0830 21:44:47.235243  978470 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0830 21:44:47.235256  978470 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:44:47.235266  978470 command_runner.go:130] > # default_sysctls = [
	I0830 21:44:47.235272  978470 command_runner.go:130] > # ]
	I0830 21:44:47.235282  978470 command_runner.go:130] > # List of devices on the host that a
	I0830 21:44:47.235292  978470 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0830 21:44:47.235302  978470 command_runner.go:130] > # allowed_devices = [
	I0830 21:44:47.235309  978470 command_runner.go:130] > # 	"/dev/fuse",
	I0830 21:44:47.235317  978470 command_runner.go:130] > # ]
	I0830 21:44:47.235326  978470 command_runner.go:130] > # List of additional devices. specified as
	I0830 21:44:47.235338  978470 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0830 21:44:47.235351  978470 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0830 21:44:47.235376  978470 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:44:47.235386  978470 command_runner.go:130] > # additional_devices = [
	I0830 21:44:47.235392  978470 command_runner.go:130] > # ]
	I0830 21:44:47.235401  978470 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0830 21:44:47.235405  978470 command_runner.go:130] > # cdi_spec_dirs = [
	I0830 21:44:47.235413  978470 command_runner.go:130] > # 	"/etc/cdi",
	I0830 21:44:47.235417  978470 command_runner.go:130] > # 	"/var/run/cdi",
	I0830 21:44:47.235420  978470 command_runner.go:130] > # ]
	I0830 21:44:47.235428  978470 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0830 21:44:47.235434  978470 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0830 21:44:47.235441  978470 command_runner.go:130] > # Defaults to false.
	I0830 21:44:47.235446  978470 command_runner.go:130] > # device_ownership_from_security_context = false
	I0830 21:44:47.235452  978470 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0830 21:44:47.235458  978470 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0830 21:44:47.235464  978470 command_runner.go:130] > # hooks_dir = [
	I0830 21:44:47.235468  978470 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0830 21:44:47.235473  978470 command_runner.go:130] > # ]
	I0830 21:44:47.235479  978470 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0830 21:44:47.235487  978470 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0830 21:44:47.235492  978470 command_runner.go:130] > # its default mounts from the following two files:
	I0830 21:44:47.235498  978470 command_runner.go:130] > #
	I0830 21:44:47.235503  978470 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0830 21:44:47.235512  978470 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0830 21:44:47.235518  978470 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0830 21:44:47.235523  978470 command_runner.go:130] > #
	I0830 21:44:47.235528  978470 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0830 21:44:47.235535  978470 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0830 21:44:47.235543  978470 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0830 21:44:47.235548  978470 command_runner.go:130] > #      only add mounts it finds in this file.
	I0830 21:44:47.235553  978470 command_runner.go:130] > #
	I0830 21:44:47.235557  978470 command_runner.go:130] > # default_mounts_file = ""
	I0830 21:44:47.235562  978470 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0830 21:44:47.235570  978470 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0830 21:44:47.235574  978470 command_runner.go:130] > pids_limit = 1024
	I0830 21:44:47.235583  978470 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0830 21:44:47.235589  978470 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0830 21:44:47.235597  978470 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0830 21:44:47.235606  978470 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0830 21:44:47.235612  978470 command_runner.go:130] > # log_size_max = -1
	I0830 21:44:47.235623  978470 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0830 21:44:47.235632  978470 command_runner.go:130] > # log_to_journald = false
	I0830 21:44:47.235666  978470 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0830 21:44:47.235674  978470 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0830 21:44:47.235679  978470 command_runner.go:130] > # Path to directory for container attach sockets.
	I0830 21:44:47.235683  978470 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0830 21:44:47.235691  978470 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0830 21:44:47.235695  978470 command_runner.go:130] > # bind_mount_prefix = ""
	I0830 21:44:47.235702  978470 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0830 21:44:47.235706  978470 command_runner.go:130] > # read_only = false
	I0830 21:44:47.235716  978470 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0830 21:44:47.235730  978470 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0830 21:44:47.235741  978470 command_runner.go:130] > # live configuration reload.
	I0830 21:44:47.235749  978470 command_runner.go:130] > # log_level = "info"
	I0830 21:44:47.235756  978470 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0830 21:44:47.235763  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:44:47.235767  978470 command_runner.go:130] > # log_filter = ""
	I0830 21:44:47.235797  978470 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0830 21:44:47.235811  978470 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0830 21:44:47.235818  978470 command_runner.go:130] > # separated by comma.
	I0830 21:44:47.235826  978470 command_runner.go:130] > # uid_mappings = ""
	I0830 21:44:47.235835  978470 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0830 21:44:47.235844  978470 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0830 21:44:47.235848  978470 command_runner.go:130] > # separated by comma.
	I0830 21:44:47.235855  978470 command_runner.go:130] > # gid_mappings = ""
	I0830 21:44:47.235862  978470 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0830 21:44:47.235875  978470 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:44:47.235890  978470 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:44:47.235900  978470 command_runner.go:130] > # minimum_mappable_uid = -1
	I0830 21:44:47.235913  978470 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0830 21:44:47.235927  978470 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:44:47.235935  978470 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:44:47.235942  978470 command_runner.go:130] > # minimum_mappable_gid = -1
	I0830 21:44:47.235948  978470 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0830 21:44:47.235962  978470 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0830 21:44:47.235975  978470 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0830 21:44:47.235985  978470 command_runner.go:130] > # ctr_stop_timeout = 30
	I0830 21:44:47.235998  978470 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0830 21:44:47.236012  978470 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0830 21:44:47.236022  978470 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0830 21:44:47.236030  978470 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0830 21:44:47.236036  978470 command_runner.go:130] > drop_infra_ctr = false
	I0830 21:44:47.236047  978470 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0830 21:44:47.236060  978470 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0830 21:44:47.236075  978470 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0830 21:44:47.236085  978470 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0830 21:44:47.236096  978470 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0830 21:44:47.236107  978470 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0830 21:44:47.236115  978470 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0830 21:44:47.236123  978470 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0830 21:44:47.236133  978470 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0830 21:44:47.236144  978470 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0830 21:44:47.236162  978470 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0830 21:44:47.236176  978470 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0830 21:44:47.236187  978470 command_runner.go:130] > # default_runtime = "runc"
	I0830 21:44:47.236198  978470 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0830 21:44:47.236210  978470 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0830 21:44:47.236230  978470 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0830 21:44:47.236242  978470 command_runner.go:130] > # creation as a file is not desired either.
	I0830 21:44:47.236259  978470 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0830 21:44:47.236271  978470 command_runner.go:130] > # the hostname is being managed dynamically.
	I0830 21:44:47.236282  978470 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0830 21:44:47.236288  978470 command_runner.go:130] > # ]
	I0830 21:44:47.236296  978470 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0830 21:44:47.236310  978470 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0830 21:44:47.236324  978470 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0830 21:44:47.236337  978470 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0830 21:44:47.236345  978470 command_runner.go:130] > #
	I0830 21:44:47.236357  978470 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0830 21:44:47.236367  978470 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0830 21:44:47.236376  978470 command_runner.go:130] > #  runtime_type = "oci"
	I0830 21:44:47.236386  978470 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0830 21:44:47.236396  978470 command_runner.go:130] > #  privileged_without_host_devices = false
	I0830 21:44:47.236406  978470 command_runner.go:130] > #  allowed_annotations = []
	I0830 21:44:47.236416  978470 command_runner.go:130] > # Where:
	I0830 21:44:47.236427  978470 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0830 21:44:47.236440  978470 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0830 21:44:47.236478  978470 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0830 21:44:47.236491  978470 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0830 21:44:47.236501  978470 command_runner.go:130] > #   in $PATH.
	I0830 21:44:47.236515  978470 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0830 21:44:47.236526  978470 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0830 21:44:47.236539  978470 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0830 21:44:47.236548  978470 command_runner.go:130] > #   state.
	I0830 21:44:47.236559  978470 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0830 21:44:47.236567  978470 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0830 21:44:47.236576  978470 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0830 21:44:47.236583  978470 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0830 21:44:47.236589  978470 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0830 21:44:47.236598  978470 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0830 21:44:47.236605  978470 command_runner.go:130] > #   The currently recognized values are:
	I0830 21:44:47.236611  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0830 21:44:47.236620  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0830 21:44:47.236632  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0830 21:44:47.236644  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0830 21:44:47.236660  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0830 21:44:47.236675  978470 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0830 21:44:47.236688  978470 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0830 21:44:47.236702  978470 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0830 21:44:47.236713  978470 command_runner.go:130] > #   should be moved to the container's cgroup
	I0830 21:44:47.236722  978470 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0830 21:44:47.236732  978470 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0830 21:44:47.236742  978470 command_runner.go:130] > runtime_type = "oci"
	I0830 21:44:47.236752  978470 command_runner.go:130] > runtime_root = "/run/runc"
	I0830 21:44:47.236763  978470 command_runner.go:130] > runtime_config_path = ""
	I0830 21:44:47.236772  978470 command_runner.go:130] > monitor_path = ""
	I0830 21:44:47.236782  978470 command_runner.go:130] > monitor_cgroup = ""
	I0830 21:44:47.236791  978470 command_runner.go:130] > monitor_exec_cgroup = ""
	I0830 21:44:47.236801  978470 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0830 21:44:47.236809  978470 command_runner.go:130] > # running containers
	I0830 21:44:47.236818  978470 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0830 21:44:47.236832  978470 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0830 21:44:47.236920  978470 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0830 21:44:47.236939  978470 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0830 21:44:47.236949  978470 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0830 21:44:47.236959  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0830 21:44:47.236970  978470 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0830 21:44:47.236977  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0830 21:44:47.236984  978470 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0830 21:44:47.236995  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0830 21:44:47.237009  978470 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0830 21:44:47.237021  978470 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0830 21:44:47.237034  978470 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0830 21:44:47.237050  978470 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0830 21:44:47.237062  978470 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0830 21:44:47.237073  978470 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0830 21:44:47.237092  978470 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0830 21:44:47.237109  978470 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0830 21:44:47.237122  978470 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0830 21:44:47.237138  978470 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0830 21:44:47.237146  978470 command_runner.go:130] > # Example:
	I0830 21:44:47.237155  978470 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0830 21:44:47.237167  978470 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0830 21:44:47.237179  978470 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0830 21:44:47.237191  978470 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0830 21:44:47.237200  978470 command_runner.go:130] > # cpuset = 0
	I0830 21:44:47.237210  978470 command_runner.go:130] > # cpushares = "0-1"
	I0830 21:44:47.237218  978470 command_runner.go:130] > # Where:
	I0830 21:44:47.237228  978470 command_runner.go:130] > # The workload name is workload-type.
	I0830 21:44:47.237238  978470 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0830 21:44:47.237249  978470 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0830 21:44:47.237263  978470 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0830 21:44:47.237280  978470 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0830 21:44:47.237293  978470 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0830 21:44:47.237301  978470 command_runner.go:130] > # 
	I0830 21:44:47.237347  978470 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0830 21:44:47.237358  978470 command_runner.go:130] > #
	I0830 21:44:47.237369  978470 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0830 21:44:47.237382  978470 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0830 21:44:47.237396  978470 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0830 21:44:47.237406  978470 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0830 21:44:47.237416  978470 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0830 21:44:47.237422  978470 command_runner.go:130] > [crio.image]
	I0830 21:44:47.237436  978470 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0830 21:44:47.237447  978470 command_runner.go:130] > # default_transport = "docker://"
	I0830 21:44:47.237458  978470 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0830 21:44:47.237471  978470 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:44:47.237481  978470 command_runner.go:130] > # global_auth_file = ""
	I0830 21:44:47.237488  978470 command_runner.go:130] > # The image used to instantiate infra containers.
	I0830 21:44:47.237498  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:44:47.237507  978470 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0830 21:44:47.237520  978470 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0830 21:44:47.237532  978470 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:44:47.237542  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:44:47.237552  978470 command_runner.go:130] > # pause_image_auth_file = ""
	I0830 21:44:47.237561  978470 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0830 21:44:47.237574  978470 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0830 21:44:47.237587  978470 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0830 21:44:47.237599  978470 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0830 21:44:47.237610  978470 command_runner.go:130] > # pause_command = "/pause"
	I0830 21:44:47.237620  978470 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0830 21:44:47.237634  978470 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0830 21:44:47.237644  978470 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0830 21:44:47.237656  978470 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0830 21:44:47.237668  978470 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0830 21:44:47.237677  978470 command_runner.go:130] > # signature_policy = ""
	I0830 21:44:47.237687  978470 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0830 21:44:47.237700  978470 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0830 21:44:47.237709  978470 command_runner.go:130] > # changing them here.
	I0830 21:44:47.237719  978470 command_runner.go:130] > # insecure_registries = [
	I0830 21:44:47.237728  978470 command_runner.go:130] > # ]
	I0830 21:44:47.237739  978470 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0830 21:44:47.237753  978470 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0830 21:44:47.237763  978470 command_runner.go:130] > # image_volumes = "mkdir"
	I0830 21:44:47.237774  978470 command_runner.go:130] > # Temporary directory to use for storing big files
	I0830 21:44:47.237781  978470 command_runner.go:130] > # big_files_temporary_dir = ""
	I0830 21:44:47.237787  978470 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0830 21:44:47.237793  978470 command_runner.go:130] > # CNI plugins.
	I0830 21:44:47.237797  978470 command_runner.go:130] > [crio.network]
	I0830 21:44:47.237805  978470 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0830 21:44:47.237810  978470 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0830 21:44:47.237817  978470 command_runner.go:130] > # cni_default_network = ""
	I0830 21:44:47.237822  978470 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0830 21:44:47.237829  978470 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0830 21:44:47.237835  978470 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0830 21:44:47.237839  978470 command_runner.go:130] > # plugin_dirs = [
	I0830 21:44:47.237845  978470 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0830 21:44:47.237848  978470 command_runner.go:130] > # ]
	I0830 21:44:47.237854  978470 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0830 21:44:47.237860  978470 command_runner.go:130] > [crio.metrics]
	I0830 21:44:47.237864  978470 command_runner.go:130] > # Globally enable or disable metrics support.
	I0830 21:44:47.237870  978470 command_runner.go:130] > enable_metrics = true
	I0830 21:44:47.237874  978470 command_runner.go:130] > # Specify enabled metrics collectors.
	I0830 21:44:47.237881  978470 command_runner.go:130] > # By default, all metrics are enabled.
	I0830 21:44:47.237887  978470 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0830 21:44:47.237895  978470 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0830 21:44:47.237901  978470 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0830 21:44:47.237907  978470 command_runner.go:130] > # metrics_collectors = [
	I0830 21:44:47.237911  978470 command_runner.go:130] > # 	"operations",
	I0830 21:44:47.237915  978470 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0830 21:44:47.237922  978470 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0830 21:44:47.237926  978470 command_runner.go:130] > # 	"operations_errors",
	I0830 21:44:47.237932  978470 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0830 21:44:47.237939  978470 command_runner.go:130] > # 	"image_pulls_by_name",
	I0830 21:44:47.237949  978470 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0830 21:44:47.237956  978470 command_runner.go:130] > # 	"image_pulls_failures",
	I0830 21:44:47.237966  978470 command_runner.go:130] > # 	"image_pulls_successes",
	I0830 21:44:47.237974  978470 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0830 21:44:47.237981  978470 command_runner.go:130] > # 	"image_layer_reuse",
	I0830 21:44:47.237987  978470 command_runner.go:130] > # 	"containers_oom_total",
	I0830 21:44:47.237991  978470 command_runner.go:130] > # 	"containers_oom",
	I0830 21:44:47.237995  978470 command_runner.go:130] > # 	"processes_defunct",
	I0830 21:44:47.238001  978470 command_runner.go:130] > # 	"operations_total",
	I0830 21:44:47.238006  978470 command_runner.go:130] > # 	"operations_latency_seconds",
	I0830 21:44:47.238011  978470 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0830 21:44:47.238016  978470 command_runner.go:130] > # 	"operations_errors_total",
	I0830 21:44:47.238023  978470 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0830 21:44:47.238027  978470 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0830 21:44:47.238031  978470 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0830 21:44:47.238035  978470 command_runner.go:130] > # 	"image_pulls_success_total",
	I0830 21:44:47.238042  978470 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0830 21:44:47.238046  978470 command_runner.go:130] > # 	"containers_oom_count_total",
	I0830 21:44:47.238052  978470 command_runner.go:130] > # ]
	I0830 21:44:47.238057  978470 command_runner.go:130] > # The port on which the metrics server will listen.
	I0830 21:44:47.238061  978470 command_runner.go:130] > # metrics_port = 9090
	I0830 21:44:47.238068  978470 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0830 21:44:47.238073  978470 command_runner.go:130] > # metrics_socket = ""
	I0830 21:44:47.238083  978470 command_runner.go:130] > # The certificate for the secure metrics server.
	I0830 21:44:47.238091  978470 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0830 21:44:47.238096  978470 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0830 21:44:47.238104  978470 command_runner.go:130] > # certificate on any modification event.
	I0830 21:44:47.238108  978470 command_runner.go:130] > # metrics_cert = ""
	I0830 21:44:47.238116  978470 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0830 21:44:47.238121  978470 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0830 21:44:47.238127  978470 command_runner.go:130] > # metrics_key = ""
	I0830 21:44:47.238132  978470 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0830 21:44:47.238138  978470 command_runner.go:130] > [crio.tracing]
	I0830 21:44:47.238143  978470 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0830 21:44:47.238150  978470 command_runner.go:130] > # enable_tracing = false
	I0830 21:44:47.238161  978470 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0830 21:44:47.238165  978470 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0830 21:44:47.238170  978470 command_runner.go:130] > # Number of samples to collect per million spans.
	I0830 21:44:47.238177  978470 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0830 21:44:47.238183  978470 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0830 21:44:47.238188  978470 command_runner.go:130] > [crio.stats]
	I0830 21:44:47.238194  978470 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0830 21:44:47.238201  978470 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0830 21:44:47.238206  978470 command_runner.go:130] > # stats_collection_period = 0
	I0830 21:44:47.238232  978470 command_runner.go:130] ! time="2023-08-30 21:44:47.220397681Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0830 21:44:47.238245  978470 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
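	The CRI-O configuration dumped above is plain TOML. A minimal Go sketch of reading two of the keys shown there (pause_image and enable_metrics), assuming the github.com/BurntSushi/toml module and the node-side /etc/crio/crio.conf path; keys not named in the struct are simply ignored by the decoder:

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// Only the keys this sketch cares about; the rest of crio.conf is ignored.
	type crioConfig struct {
		Crio struct {
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
			Metrics struct {
				EnableMetrics bool `toml:"enable_metrics"`
			} `toml:"metrics"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// Path is an assumption for the sketch (where the generated config lands on the node).
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("pause image:", cfg.Crio.Image.PauseImage)
		fmt.Println("metrics enabled:", cfg.Crio.Metrics.EnableMetrics)
	}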
	I0830 21:44:47.238306  978470 cni.go:84] Creating CNI manager for ""
	I0830 21:44:47.238314  978470 cni.go:136] 3 nodes found, recommending kindnet
	I0830 21:44:47.238325  978470 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:44:47.238346  978470 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.46 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-752665 NodeName:multinode-752665-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:44:47.238466  978470 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-752665-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.46
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:44:47.238513  978470 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-752665-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
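	The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch, assuming gopkg.in/yaml.v3 and a local copy of the generated file (the kubeadm.yaml path is a placeholder), that splits the stream and reports each document's apiVersion and kind:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Placeholder path; point it at the generated kubeadm config.
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		// Split on the YAML document separator and decode only the type metadata.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s / %s\n", meta.APIVersion, meta.Kind)
		}
	}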
	I0830 21:44:47.238561  978470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 21:44:47.247302  978470 command_runner.go:130] > kubeadm
	I0830 21:44:47.247324  978470 command_runner.go:130] > kubectl
	I0830 21:44:47.247330  978470 command_runner.go:130] > kubelet
	I0830 21:44:47.247380  978470 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:44:47.247432  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0830 21:44:47.255580  978470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0830 21:44:47.271108  978470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 21:44:47.287007  978470 ssh_runner.go:195] Run: grep 192.168.39.20	control-plane.minikube.internal$ /etc/hosts
	I0830 21:44:47.290633  978470 command_runner.go:130] > 192.168.39.20	control-plane.minikube.internal
	I0830 21:44:47.290963  978470 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:44:47.291289  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:44:47.291302  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:44:47.291345  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:44:47.306523  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0830 21:44:47.306916  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:44:47.307372  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:44:47.307394  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:44:47.307735  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:44:47.307937  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:44:47.308098  978470 start.go:301] JoinCluster: &{Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false in
gress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:44:47.308262  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0830 21:44:47.308298  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:44:47.311044  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:44:47.311471  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:44:47.311503  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:44:47.311654  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:44:47.311840  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:44:47.311993  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:44:47.312117  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:44:47.487803  978470 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token kcvtza.zejo0kwd4jzl4orb --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
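	The join command above is the output of kubeadm token create --print-join-command run over SSH on the control plane. A minimal sketch of capturing the same output with os/exec, assuming kubeadm and the admin kubeconfig are available locally, as they are on the control-plane node:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// --ttl=0 mirrors the invocation above (a non-expiring token).
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			log.Fatalf("kubeadm token create: %v", err)
		}
		// Output is a single line: "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:..."
		fmt.Println(strings.TrimSpace(string(out)))
	}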
	I0830 21:44:47.493226  978470 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 21:44:47.493272  978470 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:44:47.493577  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:44:47.493606  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:44:47.508798  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0830 21:44:47.509231  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:44:47.509757  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:44:47.509776  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:44:47.510170  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:44:47.510381  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:44:47.510580  978470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-752665-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0830 21:44:47.510605  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:44:47.513728  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:44:47.514224  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:44:47.514259  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:44:47.514418  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:44:47.514591  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:44:47.514708  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:44:47.514809  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:44:47.666766  978470 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0830 21:44:47.723534  978470 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-4q5fx, kube-system/kube-proxy-5twl5
	I0830 21:44:50.741552  978470 command_runner.go:130] > node/multinode-752665-m02 cordoned
	I0830 21:44:50.741578  978470 command_runner.go:130] > pod "busybox-5bc68d56bd-j4rx4" has DeletionTimestamp older than 1 seconds, skipping
	I0830 21:44:50.741584  978470 command_runner.go:130] > node/multinode-752665-m02 drained
	I0830 21:44:50.741607  978470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-752665-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.231005933s)
	I0830 21:44:50.741622  978470 node.go:108] successfully drained node "m02"
	I0830 21:44:50.742003  978470 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:44:50.742237  978470 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:44:50.742636  978470 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0830 21:44:50.742693  978470 round_trippers.go:463] DELETE https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:44:50.742700  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:50.742708  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:50.742714  978470 round_trippers.go:473]     Content-Type: application/json
	I0830 21:44:50.742723  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:50.756081  978470 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0830 21:44:50.756104  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:50.756111  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:50.756117  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:50.756124  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:50.756131  978470 round_trippers.go:580]     Content-Length: 171
	I0830 21:44:50.756139  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:50 GMT
	I0830 21:44:50.756148  978470 round_trippers.go:580]     Audit-Id: c26d9f55-7bbc-474d-b653-f48af411bf27
	I0830 21:44:50.756156  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:50.756193  978470 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-752665-m02","kind":"nodes","uid":"281f5c10-5eea-4a42-9ede-3f15a3bcd0d0"}}
	I0830 21:44:50.756241  978470 node.go:124] successfully deleted node "m02"
	I0830 21:44:50.756264  978470 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
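	The drain and the DELETE https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02 request above have straightforward client-go equivalents. A minimal sketch, assuming the kubeconfig path from this run, of the cordon half of the drain followed by deleting the node object (pod eviction, which the kubectl drain above also performs, is omitted):

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		const nodeName = "multinode-752665-m02"
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17114-955377/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		// Cordon: mark the node unschedulable so no new pods land on it.
		node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		node.Spec.Unschedulable = true
		if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		// Delete the node object, the same call the DELETE request above represents.
		if err := client.CoreV1().Nodes().Delete(ctx, nodeName, metav1.DeleteOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Printf("node %s cordoned and deleted", nodeName)
	}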
	I0830 21:44:50.756291  978470 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 21:44:50.756318  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kcvtza.zejo0kwd4jzl4orb --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-752665-m02"
	I0830 21:44:50.809694  978470 command_runner.go:130] > [preflight] Running pre-flight checks
	I0830 21:44:50.963950  978470 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0830 21:44:50.964017  978470 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0830 21:44:51.030826  978470 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:44:51.033778  978470 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:44:51.033796  978470 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 21:44:51.185357  978470 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0830 21:44:51.710536  978470 command_runner.go:130] > This node has joined the cluster:
	I0830 21:44:51.710560  978470 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0830 21:44:51.710567  978470 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0830 21:44:51.710579  978470 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0830 21:44:51.713475  978470 command_runner.go:130] ! W0830 21:44:50.801325    2623 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0830 21:44:51.713512  978470 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0830 21:44:51.713523  978470 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0830 21:44:51.713539  978470 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0830 21:44:51.713602  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0830 21:44:51.976374  978470 start.go:303] JoinCluster complete in 4.668270973s
	I0830 21:44:51.976413  978470 cni.go:84] Creating CNI manager for ""
	I0830 21:44:51.976420  978470 cni.go:136] 3 nodes found, recommending kindnet
	I0830 21:44:51.976471  978470 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 21:44:51.983420  978470 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 21:44:51.983449  978470 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0830 21:44:51.983460  978470 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0830 21:44:51.983469  978470 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:44:51.983479  978470 command_runner.go:130] > Access: 2023-08-30 21:42:23.592476286 +0000
	I0830 21:44:51.983488  978470 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0830 21:44:51.983493  978470 command_runner.go:130] > Change: 2023-08-30 21:42:21.726476286 +0000
	I0830 21:44:51.983506  978470 command_runner.go:130] >  Birth: -
	I0830 21:44:51.983560  978470 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 21:44:51.983573  978470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 21:44:52.001544  978470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 21:44:52.319282  978470 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0830 21:44:52.341923  978470 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0830 21:44:52.346672  978470 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0830 21:44:52.380719  978470 command_runner.go:130] > daemonset.apps/kindnet configured
	I0830 21:44:52.383920  978470 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:44:52.384280  978470 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
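	The rest.Config dumped above is what client-go's clientcmd loader produces from the kubeconfig on disk. A minimal sketch of loading the same file and printing the fields visible in that dump:

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log lines above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17114-955377/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		// API server host plus the client certificate/key and CA used for TLS.
		fmt.Println("host:", cfg.Host)
		fmt.Println("client cert:", cfg.TLSClientConfig.CertFile)
		fmt.Println("client key:", cfg.TLSClientConfig.KeyFile)
		fmt.Println("ca:", cfg.TLSClientConfig.CAFile)
	}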
	I0830 21:44:52.384739  978470 round_trippers.go:463] GET https://192.168.39.20:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 21:44:52.384759  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.384770  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.384782  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.388171  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:44:52.388203  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.388214  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.388223  978470 round_trippers.go:580]     Content-Length: 291
	I0830 21:44:52.388232  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.388241  978470 round_trippers.go:580]     Audit-Id: b4abd995-aa61-499e-81f1-8b0d979cd388
	I0830 21:44:52.388249  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.388259  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.388272  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.388301  978470 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4cda7228-5995-4a40-902e-7c8e87f8c72e","resourceVersion":"858","creationTimestamp":"2023-08-30T21:32:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0830 21:44:52.388419  978470 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-752665" context rescaled to 1 replicas
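	The rescale above goes through the Deployment's scale subresource (the GET .../deployments/coredns/scale shown just before it). A minimal client-go sketch of reading that subresource and pinning it to 1 replica, assuming the same kubeconfig path:

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17114-955377/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		// Read the scale subresource, the same object the GET above returns.
		scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("coredns replicas: %d", scale.Spec.Replicas)
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
				log.Fatal(err)
			}
		}
	}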
	I0830 21:44:52.388459  978470 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 21:44:52.390120  978470 out.go:177] * Verifying Kubernetes components...
	I0830 21:44:52.391434  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:44:52.406289  978470 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:44:52.406606  978470 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:44:52.406912  978470 node_ready.go:35] waiting up to 6m0s for node "multinode-752665-m02" to be "Ready" ...
	I0830 21:44:52.406982  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:44:52.407009  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.407019  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.407025  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.409459  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:52.409475  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.409483  978470 round_trippers.go:580]     Audit-Id: 2c1f5914-1cd3-44f5-9f1d-65659054cf57
	I0830 21:44:52.409489  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.409494  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.409502  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.409508  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.409516  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.409623  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"ff65bdbe-779a-4252-a23d-cbb7efdf27f9","resourceVersion":"1005","creationTimestamp":"2023-08-30T21:44:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:
51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations": [truncated 3561 chars]
	I0830 21:44:52.409943  978470 node_ready.go:49] node "multinode-752665-m02" has status "Ready":"True"
	I0830 21:44:52.409958  978470 node_ready.go:38] duration metric: took 3.028182ms waiting for node "multinode-752665-m02" to be "Ready" ...
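	The readiness wait above polls the node object until its Ready condition reports True, with a 6m0s budget. A minimal client-go sketch of the same loop, assuming the kubeconfig path from this run:

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17114-955377/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Poll until the node's Ready condition is True, up to the same 6-minute window.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.Background(), "multinode-752665-m02", metav1.GetOptions{})
			if err != nil {
				// Keep polling on transient API errors.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("node is Ready")
	}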
	I0830 21:44:52.409968  978470 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:44:52.410040  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:44:52.410050  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.410061  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.410072  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.413978  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:44:52.413992  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.413999  978470 round_trippers.go:580]     Audit-Id: 5b17c92a-6eb0-46fc-a76d-b67242a4faa6
	I0830 21:44:52.414005  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.414010  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.414016  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.414021  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.414030  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.415565  978470 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1012"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"854","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82193 chars]
	I0830 21:44:52.418011  978470 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.418071  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:44:52.418079  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.418086  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.418093  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.420433  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:52.420453  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.420462  978470 round_trippers.go:580]     Audit-Id: af8b5b8b-9ace-4549-abc9-49da3376ef0f
	I0830 21:44:52.420471  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.420481  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.420492  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.420503  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.420514  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.420660  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"854","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0830 21:44:52.421134  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:44:52.421148  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.421155  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.421166  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.424274  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:44:52.424290  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.424300  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.424308  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.424317  978470 round_trippers.go:580]     Audit-Id: a79c982b-325a-47da-8633-8c056517b93e
	I0830 21:44:52.424330  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.424346  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.424355  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.424595  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:44:52.424880  978470 pod_ready.go:92] pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace has status "Ready":"True"
	I0830 21:44:52.424894  978470 pod_ready.go:81] duration metric: took 6.862528ms waiting for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.424905  978470 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.424956  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-752665
	I0830 21:44:52.424965  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.424976  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.424986  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.427235  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:52.427248  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.427254  978470 round_trippers.go:580]     Audit-Id: 4a1b651b-cfd7-45f7-bb01-51e61bf2a102
	I0830 21:44:52.427260  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.427265  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.427271  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.427280  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.427290  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.427591  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"830","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0830 21:44:52.427925  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:44:52.427937  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.427947  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.427955  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.430090  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:52.430104  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.430110  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.430116  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.430136  978470 round_trippers.go:580]     Audit-Id: d2bfaf1e-29ed-429a-9a92-a3145a7f03ef
	I0830 21:44:52.430145  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.430162  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.430173  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.430287  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:44:52.430621  978470 pod_ready.go:92] pod "etcd-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:44:52.430635  978470 pod_ready.go:81] duration metric: took 5.724355ms waiting for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.430660  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.430720  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-752665
	I0830 21:44:52.430730  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.430740  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.430749  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.432541  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:44:52.432558  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.432565  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.432573  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.432586  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.432594  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.432606  978470 round_trippers.go:580]     Audit-Id: 7690b524-d9f4-4c26-be6c-51773656a3d0
	I0830 21:44:52.432618  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.432864  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-752665","namespace":"kube-system","uid":"d813d11d-d0ec-4091-a72b-187bd44eabe3","resourceVersion":"844","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.20:8443","kubernetes.io/config.hash":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.mirror":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.seen":"2023-08-30T21:32:26.214498990Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0830 21:44:52.433195  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:44:52.433205  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.433212  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.433218  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.436575  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:44:52.436593  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.436600  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.436605  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.436613  978470 round_trippers.go:580]     Audit-Id: 70c1a5f8-69d1-4677-af88-442db1fe1bc7
	I0830 21:44:52.436622  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.436627  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.436633  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.437382  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:44:52.437635  978470 pod_ready.go:92] pod "kube-apiserver-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:44:52.437645  978470 pod_ready.go:81] duration metric: took 6.975947ms waiting for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.437653  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.437691  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-752665
	I0830 21:44:52.437698  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.437705  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.437711  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.442222  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:44:52.442241  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.442249  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.442254  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.442262  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.442268  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.442276  978470 round_trippers.go:580]     Audit-Id: 8fe0395d-ea0e-48d1-971c-fa8f3c3e6a8d
	I0830 21:44:52.442281  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.442969  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-752665","namespace":"kube-system","uid":"0391b35f-5177-412c-b7d4-073efb2de36b","resourceVersion":"846","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.mirror":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.seen":"2023-08-30T21:32:26.214500244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0830 21:44:52.443306  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:44:52.443316  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.443323  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.443329  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.445506  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:52.445522  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.445528  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.445533  978470 round_trippers.go:580]     Audit-Id: 32c7c117-d25a-4b85-b9cc-150540ecc4bb
	I0830 21:44:52.445539  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.445553  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.445562  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.445575  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.445837  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:44:52.446093  978470 pod_ready.go:92] pod "kube-controller-manager-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:44:52.446104  978470 pod_ready.go:81] duration metric: took 8.446272ms waiting for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.446116  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:52.607638  978470 request.go:629] Waited for 161.439458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:44:52.607709  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:44:52.607714  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.607722  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.607729  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.611989  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:44:52.612015  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.612024  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.612033  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.612040  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.612049  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.612059  978470 round_trippers.go:580]     Audit-Id: 0a407a41-2e91-4eb4-a482-9656861caba9
	I0830 21:44:52.612071  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.612229  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5twl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff4250a4-1482-42c0-a523-e97faf806c43","resourceVersion":"1010","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0830 21:44:52.807053  978470 request.go:629] Waited for 194.296367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:44:52.807140  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:44:52.807150  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:52.807159  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:52.807165  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:52.810499  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:44:52.810518  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:52.810528  978470 round_trippers.go:580]     Audit-Id: 128d22a4-f558-4b7c-9005-e584add76e04
	I0830 21:44:52.810537  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:52.810554  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:52.810567  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:52.810579  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:52.810588  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:52 GMT
	I0830 21:44:52.810747  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"ff65bdbe-779a-4252-a23d-cbb7efdf27f9","resourceVersion":"1005","creationTimestamp":"2023-08-30T21:44:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:
51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations": [truncated 3561 chars]
	I0830 21:44:53.007685  978470 request.go:629] Waited for 196.604894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:44:53.007748  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:44:53.007752  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:53.007760  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:53.007766  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:53.010335  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:53.010355  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:53.010362  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:53.010368  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:53 GMT
	I0830 21:44:53.010376  978470 round_trippers.go:580]     Audit-Id: 938212df-0fc9-47b2-89b6-b62d3d44ffef
	I0830 21:44:53.010384  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:53.010393  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:53.010402  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:53.010597  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5twl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff4250a4-1482-42c0-a523-e97faf806c43","resourceVersion":"1010","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0830 21:44:53.207378  978470 request.go:629] Waited for 196.176348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:44:53.207477  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:44:53.207488  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:53.207498  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:53.207508  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:53.210603  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:44:53.210624  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:53.210631  978470 round_trippers.go:580]     Audit-Id: 9fd1e8e6-bef9-4170-b37f-e7794221cc1b
	I0830 21:44:53.210637  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:53.210642  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:53.210647  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:53.210652  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:53.210658  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:53 GMT
	I0830 21:44:53.211213  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"ff65bdbe-779a-4252-a23d-cbb7efdf27f9","resourceVersion":"1005","creationTimestamp":"2023-08-30T21:44:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:
51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations": [truncated 3561 chars]
	I0830 21:44:53.712316  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:44:53.712342  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:53.712351  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:53.712357  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:53.715524  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:44:53.715554  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:53.715565  978470 round_trippers.go:580]     Audit-Id: c4ad38f0-15f4-41bf-a59f-feee379f0cfa
	I0830 21:44:53.715573  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:53.715581  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:53.715589  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:53.715598  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:53.715606  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:53 GMT
	I0830 21:44:53.715908  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5twl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff4250a4-1482-42c0-a523-e97faf806c43","resourceVersion":"1021","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0830 21:44:53.716401  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:44:53.716416  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:53.716423  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:53.716430  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:53.719049  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:53.719073  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:53.719088  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:53 GMT
	I0830 21:44:53.719096  978470 round_trippers.go:580]     Audit-Id: 2b65e507-fb9b-4fdf-aeba-938eb03e7e34
	I0830 21:44:53.719104  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:53.719113  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:53.719124  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:53.719132  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:53.719324  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"ff65bdbe-779a-4252-a23d-cbb7efdf27f9","resourceVersion":"1005","creationTimestamp":"2023-08-30T21:44:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:
51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations": [truncated 3561 chars]
	I0830 21:44:53.719674  978470 pod_ready.go:92] pod "kube-proxy-5twl5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:44:53.719693  978470 pod_ready.go:81] duration metric: took 1.273566033s waiting for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:53.719705  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jwftn" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:53.808019  978470 request.go:629] Waited for 88.242945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwftn
	I0830 21:44:53.808084  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwftn
	I0830 21:44:53.808088  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:53.808096  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:53.808103  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:53.811100  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:53.811118  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:53.811125  978470 round_trippers.go:580]     Audit-Id: 5138fbe8-f318-49d3-9da4-2e7dae54c47d
	I0830 21:44:53.811130  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:53.811137  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:53.811143  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:53.811148  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:53.811165  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:53 GMT
	I0830 21:44:53.811294  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jwftn","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc888c8-7790-4267-a1fc-cab9448e097b","resourceVersion":"675","creationTimestamp":"2023-08-30T21:34:21Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:34:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0830 21:44:54.007023  978470 request.go:629] Waited for 195.293264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:44:54.007100  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:44:54.007105  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:54.007112  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:54.007119  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:54.010202  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:44:54.010226  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:54.010233  978470 round_trippers.go:580]     Audit-Id: f7ac53f4-d9e3-403c-812f-ba43fe6e817d
	I0830 21:44:54.010239  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:54.010244  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:54.010251  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:54.010259  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:54.010267  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:54 GMT
	I0830 21:44:54.010886  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m03","uid":"2c7759fc-7c08-4ea2-b0c4-b56d98a23e6f","resourceVersion":"748","creationTimestamp":"2023-08-30T21:35:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:35:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3412 chars]
	I0830 21:44:54.011178  978470 pod_ready.go:92] pod "kube-proxy-jwftn" in "kube-system" namespace has status "Ready":"True"
	I0830 21:44:54.011192  978470 pod_ready.go:81] duration metric: took 291.479237ms waiting for pod "kube-proxy-jwftn" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:54.011201  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:54.207630  978470 request.go:629] Waited for 196.35639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:44:54.207711  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:44:54.207716  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:54.207725  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:54.207731  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:54.212549  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:44:54.212570  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:54.212578  978470 round_trippers.go:580]     Audit-Id: a77b0621-ea06-4b41-a44e-ad65a11ebb7f
	I0830 21:44:54.212584  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:54.212589  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:54.212594  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:54.212600  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:54.212606  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:54 GMT
	I0830 21:44:54.212849  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vltx5","generateName":"kube-proxy-","namespace":"kube-system","uid":"24ee271e-5778-4d0c-ab2c-77426f2673b3","resourceVersion":"752","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0830 21:44:54.407732  978470 request.go:629] Waited for 194.374178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:44:54.407848  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:44:54.407861  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:54.407872  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:54.407887  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:54.410850  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:54.410875  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:54.410885  978470 round_trippers.go:580]     Audit-Id: e2c54494-86fa-4e6f-a189-2daba5d67fea
	I0830 21:44:54.410893  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:54.410901  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:54.410909  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:54.410919  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:54.410927  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:54 GMT
	I0830 21:44:54.411190  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:44:54.411585  978470 pod_ready.go:92] pod "kube-proxy-vltx5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:44:54.411606  978470 pod_ready.go:81] duration metric: took 400.397816ms waiting for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:54.411619  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:54.608058  978470 request.go:629] Waited for 196.353344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:44:54.608135  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:44:54.608139  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:54.608147  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:54.608154  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:54.611016  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:54.611032  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:54.611040  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:54.611048  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:54.611055  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:54.611063  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:54 GMT
	I0830 21:44:54.611072  978470 round_trippers.go:580]     Audit-Id: 1f1b20c7-2891-4aee-a4f8-d6b86fc8c80a
	I0830 21:44:54.611079  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:54.611479  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-752665","namespace":"kube-system","uid":"4c8a6a98-51b6-4010-9519-add75ab1a7a9","resourceVersion":"842","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.mirror":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.seen":"2023-08-30T21:32:35.235897289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0830 21:44:54.807144  978470 request.go:629] Waited for 195.270413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:44:54.807228  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:44:54.807237  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:54.807248  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:54.807273  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:54.810083  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:54.810111  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:54.810123  978470 round_trippers.go:580]     Audit-Id: 201e7ad8-7d4d-412e-be5f-be91ac2734d7
	I0830 21:44:54.810131  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:54.810136  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:54.810142  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:54.810148  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:54.810154  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:54 GMT
	I0830 21:44:54.810565  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:44:54.810887  978470 pod_ready.go:92] pod "kube-scheduler-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:44:54.810902  978470 pod_ready.go:81] duration metric: took 399.274476ms waiting for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:44:54.810912  978470 pod_ready.go:38] duration metric: took 2.400933446s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
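For reference, the "Ready":"True" checks logged above come down to inspecting each pod's PodReady condition. A minimal client-go sketch of that check (not minikube's pod_ready.go; the kubeconfig path and the pod name are illustrative, taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True,
// which is what the log lines above summarize as status "Ready":"True".
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log above purely as an example.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-752665", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}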
	I0830 21:44:54.810928  978470 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:44:54.810974  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:44:54.825676  978470 system_svc.go:56] duration metric: took 14.740909ms WaitForService to wait for kubelet.
	I0830 21:44:54.825697  978470 kubeadm.go:581] duration metric: took 2.437212617s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:44:54.825715  978470 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:44:55.007084  978470 request.go:629] Waited for 181.284047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes
	I0830 21:44:55.007159  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes
	I0830 21:44:55.007167  978470 round_trippers.go:469] Request Headers:
	I0830 21:44:55.007178  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:44:55.007187  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:44:55.009307  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:44:55.009332  978470 round_trippers.go:577] Response Headers:
	I0830 21:44:55.009343  978470 round_trippers.go:580]     Audit-Id: 28f686f0-48c7-422d-9410-f9e5a4a3477d
	I0830 21:44:55.009351  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:44:55.009359  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:44:55.009372  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:44:55.009380  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:44:55.009388  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:44:55 GMT
	I0830 21:44:55.009973  978470 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1025"},"items":[{"metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15223 chars]
	I0830 21:44:55.010589  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:44:55.010610  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:44:55.010619  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:44:55.010623  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:44:55.010626  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:44:55.010630  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:44:55.010634  978470 node_conditions.go:105] duration metric: took 184.914792ms to run NodePressure ...
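The repeated "Waited for ... due to client-side throttling" lines above are emitted by client-go's own rate limiter, which is configured through the QPS and Burst fields of rest.Config; bursts of GETs beyond that budget are queued in the client before they ever reach the API server. A small sketch under those assumptions (values and kubeconfig path are illustrative, not minikube's settings), which also reads back the node capacity figures reported by node_conditions.go:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Low QPS/Burst values make rapid polling queue up client-side,
	// which is what the request.go throttling messages report.
	cfg.QPS = 5
	cfg.Burst = 10

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print per-node CPU and ephemeral-storage capacity, matching the
	// "node cpu capacity" / "node storage ephemeral capacity" lines above.
	for _, n := range nodes.Items {
		fmt.Println(n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}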
	I0830 21:44:55.010647  978470 start.go:228] waiting for startup goroutines ...
	I0830 21:44:55.010668  978470 start.go:242] writing updated cluster config ...
	I0830 21:44:55.011093  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:44:55.011231  978470 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:44:55.013548  978470 out.go:177] * Starting worker node multinode-752665-m03 in cluster multinode-752665
	I0830 21:44:55.015160  978470 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:44:55.015184  978470 cache.go:57] Caching tarball of preloaded images
	I0830 21:44:55.015267  978470 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 21:44:55.015278  978470 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 21:44:55.015379  978470 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/config.json ...
	I0830 21:44:55.015531  978470 start.go:365] acquiring machines lock for multinode-752665-m03: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:44:55.015568  978470 start.go:369] acquired machines lock for "multinode-752665-m03" in 21.087µs
	I0830 21:44:55.015582  978470 start.go:96] Skipping create...Using existing machine configuration
	I0830 21:44:55.015589  978470 fix.go:54] fixHost starting: m03
	I0830 21:44:55.015863  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:44:55.015886  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:44:55.030589  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0830 21:44:55.031007  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:44:55.031480  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:44:55.031505  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:44:55.031841  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:44:55.032033  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .DriverName
	I0830 21:44:55.032187  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetState
	I0830 21:44:55.033571  978470 fix.go:102] recreateIfNeeded on multinode-752665-m03: state=Running err=<nil>
	W0830 21:44:55.033589  978470 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 21:44:55.035685  978470 out.go:177] * Updating the running kvm2 "multinode-752665-m03" VM ...
	I0830 21:44:55.037071  978470 machine.go:88] provisioning docker machine ...
	I0830 21:44:55.037087  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .DriverName
	I0830 21:44:55.037300  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetMachineName
	I0830 21:44:55.037461  978470 buildroot.go:166] provisioning hostname "multinode-752665-m03"
	I0830 21:44:55.037483  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetMachineName
	I0830 21:44:55.037634  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	I0830 21:44:55.040034  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.040396  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:44:55.040425  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.040537  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHPort
	I0830 21:44:55.040714  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:44:55.040864  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:44:55.041027  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHUsername
	I0830 21:44:55.041185  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:44:55.041630  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0830 21:44:55.041644  978470 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-752665-m03 && echo "multinode-752665-m03" | sudo tee /etc/hostname
	I0830 21:44:55.183389  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-752665-m03
	
	I0830 21:44:55.183423  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	I0830 21:44:55.186158  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.186577  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:44:55.186615  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.186821  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHPort
	I0830 21:44:55.187008  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:44:55.187183  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:44:55.187310  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHUsername
	I0830 21:44:55.187498  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:44:55.188013  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0830 21:44:55.188032  978470 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-752665-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-752665-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-752665-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:44:55.317070  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
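The hostname commands above are run over SSH against the worker VM. A minimal sketch of that kind of provisioning step, assuming key-based SSH access to 192.168.39.30 with the "docker" user and the per-machine key path used here for illustration (this is not the libmachine implementation):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed key location for the VM; adjust to wherever the machine key lives.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/multinode-752665-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.30:22", &ssh.ClientConfig{
		User:            "docker", // assumed VM user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same idea as the provisioning step logged above: set the hostname remotely.
	out, err := sess.CombinedOutput(`sudo hostname multinode-752665-m03 && hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}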
	I0830 21:44:55.317098  978470 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 21:44:55.317116  978470 buildroot.go:174] setting up certificates
	I0830 21:44:55.317127  978470 provision.go:83] configureAuth start
	I0830 21:44:55.317138  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetMachineName
	I0830 21:44:55.317467  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetIP
	I0830 21:44:55.320081  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.320451  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:44:55.320474  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.320655  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	I0830 21:44:55.323019  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.323313  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:44:55.323347  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.323446  978470 provision.go:138] copyHostCerts
	I0830 21:44:55.323478  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:44:55.323518  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 21:44:55.323530  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:44:55.323606  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 21:44:55.323700  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:44:55.323724  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 21:44:55.323731  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:44:55.323786  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 21:44:55.323853  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:44:55.323878  978470 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 21:44:55.323887  978470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:44:55.323921  978470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 21:44:55.323985  978470 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.multinode-752665-m03 san=[192.168.39.30 192.168.39.30 localhost 127.0.0.1 minikube multinode-752665-m03]
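	The server certificate in the step above is generated inside minikube's own Go code; no openssl invocation appears in the log. As a hedged illustration of what a certificate with that org and SAN list amounts to, the openssl commands below produce an equivalent artifact (file names, key size and validity period are assumptions for the sketch, not values taken from minikube):

		# Illustrative equivalent only; minikube generates server.pem/server-key.pem in-process.
		openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.multinode-752665-m03" \
		  -keyout server-key.pem -out server.csr
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
		  -extfile <(printf "subjectAltName=IP:192.168.39.30,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-752665-m03") \
		  -out server.pem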
	I0830 21:44:55.422818  978470 provision.go:172] copyRemoteCerts
	I0830 21:44:55.422887  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:44:55.422921  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	I0830 21:44:55.425615  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.425979  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:44:55.426014  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.426145  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHPort
	I0830 21:44:55.426351  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:44:55.426547  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHUsername
	I0830 21:44:55.426678  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m03/id_rsa Username:docker}
	I0830 21:44:55.520864  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 21:44:55.520938  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0830 21:44:55.545402  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 21:44:55.545493  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 21:44:55.568967  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 21:44:55.569033  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 21:44:55.591415  978470 provision.go:86] duration metric: configureAuth took 274.272909ms
	I0830 21:44:55.591447  978470 buildroot.go:189] setting minikube options for container-runtime
	I0830 21:44:55.591719  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:44:55.591840  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	I0830 21:44:55.594489  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.594821  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:44:55.594854  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:44:55.595066  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHPort
	I0830 21:44:55.595295  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:44:55.595450  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:44:55.595592  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHUsername
	I0830 21:44:55.595824  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:44:55.596253  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0830 21:44:55.596279  978470 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:46:26.234918  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:46:26.234969  978470 machine.go:91] provisioned docker machine in 1m31.197883174s
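	The '%!s(MISSING)' in the SSH command above is a Go format verb that lost its argument when the command was echoed into the log; judging from the output that follows, the command that actually ran is effectively:

		sudo mkdir -p /etc/sysconfig && printf %s "
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

	Note the timestamps: the step was issued at 21:44:55 and returned at 21:46:26, so the crio restart blocked for roughly 91 seconds and accounts for nearly all of the 1m31s provisioning time reported above.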
	I0830 21:46:26.234985  978470 start.go:300] post-start starting for "multinode-752665-m03" (driver="kvm2")
	I0830 21:46:26.235002  978470 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:46:26.235045  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .DriverName
	I0830 21:46:26.235421  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:46:26.235463  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	I0830 21:46:26.238501  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.238873  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:46:26.238902  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.239091  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHPort
	I0830 21:46:26.239298  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:46:26.239437  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHUsername
	I0830 21:46:26.239588  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m03/id_rsa Username:docker}
	I0830 21:46:26.334148  978470 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:46:26.338162  978470 command_runner.go:130] > NAME=Buildroot
	I0830 21:46:26.338189  978470 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0830 21:46:26.338195  978470 command_runner.go:130] > ID=buildroot
	I0830 21:46:26.338202  978470 command_runner.go:130] > VERSION_ID=2021.02.12
	I0830 21:46:26.338209  978470 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0830 21:46:26.338293  978470 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 21:46:26.338318  978470 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 21:46:26.338436  978470 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 21:46:26.338531  978470 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 21:46:26.338543  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /etc/ssl/certs/9626212.pem
	I0830 21:46:26.338632  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 21:46:26.347499  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:46:26.370237  978470 start.go:303] post-start completed in 135.232755ms
	I0830 21:46:26.370261  978470 fix.go:56] fixHost completed within 1m31.354672121s
	I0830 21:46:26.370285  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	I0830 21:46:26.373158  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.373574  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:46:26.373608  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.373806  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHPort
	I0830 21:46:26.374012  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:46:26.374168  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:46:26.374280  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHUsername
	I0830 21:46:26.374435  978470 main.go:141] libmachine: Using SSH client type: native
	I0830 21:46:26.375038  978470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0830 21:46:26.375056  978470 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 21:46:26.521246  978470 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693431986.513276543
	
	I0830 21:46:26.521276  978470 fix.go:206] guest clock: 1693431986.513276543
	I0830 21:46:26.521285  978470 fix.go:219] Guest: 2023-08-30 21:46:26.513276543 +0000 UTC Remote: 2023-08-30 21:46:26.370265257 +0000 UTC m=+553.571876834 (delta=143.011286ms)
	I0830 21:46:26.521305  978470 fix.go:190] guest clock delta is within tolerance: 143.011286ms
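	The mangled-looking command above ('date +%!s(MISSING).%!N(MISSING)') is another logging artifact of Go's format verbs; what runs on the guest is a plain date call whose seconds.nanoseconds output is then compared against the host clock to produce the delta reported here:

		date +%s.%N    # prints e.g. 1693431986.513276543, the guest clock reading shown above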
	I0830 21:46:26.521312  978470 start.go:83] releasing machines lock for "multinode-752665-m03", held for 1m31.50573415s
	I0830 21:46:26.521340  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .DriverName
	I0830 21:46:26.521660  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetIP
	I0830 21:46:26.524450  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.524908  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:46:26.524953  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.527070  978470 out.go:177] * Found network options:
	I0830 21:46:26.528642  978470 out.go:177]   - NO_PROXY=192.168.39.20,192.168.39.46
	W0830 21:46:26.530130  978470 proxy.go:119] fail to check proxy env: Error ip not in block
	W0830 21:46:26.530154  978470 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 21:46:26.530171  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .DriverName
	I0830 21:46:26.530941  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .DriverName
	I0830 21:46:26.531189  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .DriverName
	I0830 21:46:26.531309  978470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:46:26.531353  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	W0830 21:46:26.531385  978470 proxy.go:119] fail to check proxy env: Error ip not in block
	W0830 21:46:26.531405  978470 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 21:46:26.531500  978470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:46:26.531525  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHHostname
	I0830 21:46:26.534270  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.534586  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.534643  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:46:26.534671  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.534779  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHPort
	I0830 21:46:26.534970  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:46:26.535072  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:46:26.535109  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHUsername
	I0830 21:46:26.535110  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:26.535266  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m03/id_rsa Username:docker}
	I0830 21:46:26.535308  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHPort
	I0830 21:46:26.535447  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHKeyPath
	I0830 21:46:26.535562  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetSSHUsername
	I0830 21:46:26.535737  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m03/id_rsa Username:docker}
	I0830 21:46:26.651232  978470 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 21:46:26.780858  978470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 21:46:26.786871  978470 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0830 21:46:26.787145  978470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 21:46:26.787232  978470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:46:26.795516  978470 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
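	The find command above likewise shows '%!p(MISSING)' where the original printf verb '%p' stood. Its job is to rename any bridge or podman CNI configs to *.mk_disabled so they do not shadow the CNI that minikube installs; a readable reconstruction is below (the quoting is an assumption, the logged command passes the patterns unquoted). In this run nothing matched, hence the 'no active bridge cni configs found' result above.

		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;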
	I0830 21:46:26.795539  978470 start.go:466] detecting cgroup driver to use...
	I0830 21:46:26.795608  978470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:46:26.809272  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:46:26.821978  978470 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:46:26.822043  978470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:46:26.836554  978470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:46:26.849015  978470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:46:27.003274  978470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:46:27.140419  978470 docker.go:212] disabling docker service ...
	I0830 21:46:27.140508  978470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:46:27.155294  978470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:46:27.168125  978470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:46:27.297980  978470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:46:27.422848  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:46:27.435302  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:46:27.452776  978470 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0830 21:46:27.452820  978470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 21:46:27.452879  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:46:27.463156  978470 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:46:27.463218  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:46:27.472475  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:46:27.481897  978470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
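	If the sed edits above find matching lines, the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up carrying settings equivalent to the sketch below (illustrative; the rest of the file is untouched, and the pause_image line is only rewritten if one already exists). The 'crio config' dump later in this log confirms the cgroup_manager and conmon_cgroup values took effect.

		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"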
	I0830 21:46:27.490651  978470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:46:27.500131  978470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:46:27.508506  978470 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0830 21:46:27.508571  978470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:46:27.516604  978470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:46:27.631543  978470 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 21:46:29.722260  978470 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.090665281s)
	I0830 21:46:29.722317  978470 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:46:29.722386  978470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:46:29.727406  978470 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0830 21:46:29.727432  978470 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 21:46:29.727449  978470 command_runner.go:130] > Device: 16h/22d	Inode: 1222        Links: 1
	I0830 21:46:29.727463  978470 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:46:29.727471  978470 command_runner.go:130] > Access: 2023-08-30 21:46:29.620628770 +0000
	I0830 21:46:29.727485  978470 command_runner.go:130] > Modify: 2023-08-30 21:46:29.620628770 +0000
	I0830 21:46:29.727496  978470 command_runner.go:130] > Change: 2023-08-30 21:46:29.620628770 +0000
	I0830 21:46:29.727509  978470 command_runner.go:130] >  Birth: -
	I0830 21:46:29.727580  978470 start.go:534] Will wait 60s for crictl version
	I0830 21:46:29.727647  978470 ssh_runner.go:195] Run: which crictl
	I0830 21:46:29.731464  978470 command_runner.go:130] > /usr/bin/crictl
	I0830 21:46:29.731603  978470 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:46:29.768375  978470 command_runner.go:130] > Version:  0.1.0
	I0830 21:46:29.768396  978470 command_runner.go:130] > RuntimeName:  cri-o
	I0830 21:46:29.768401  978470 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0830 21:46:29.768406  978470 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0830 21:46:29.768423  978470 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 21:46:29.768497  978470 ssh_runner.go:195] Run: crio --version
	I0830 21:46:29.816609  978470 command_runner.go:130] > crio version 1.24.1
	I0830 21:46:29.816640  978470 command_runner.go:130] > Version:          1.24.1
	I0830 21:46:29.816652  978470 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:46:29.816659  978470 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:46:29.816668  978470 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:46:29.816675  978470 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:46:29.816681  978470 command_runner.go:130] > Compiler:         gc
	I0830 21:46:29.816689  978470 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:46:29.816697  978470 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:46:29.816718  978470 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:46:29.816724  978470 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:46:29.816731  978470 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:46:29.816827  978470 ssh_runner.go:195] Run: crio --version
	I0830 21:46:29.858181  978470 command_runner.go:130] > crio version 1.24.1
	I0830 21:46:29.858209  978470 command_runner.go:130] > Version:          1.24.1
	I0830 21:46:29.858219  978470 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0830 21:46:29.858226  978470 command_runner.go:130] > GitTreeState:     dirty
	I0830 21:46:29.858234  978470 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0830 21:46:29.858242  978470 command_runner.go:130] > GoVersion:        go1.19.9
	I0830 21:46:29.858248  978470 command_runner.go:130] > Compiler:         gc
	I0830 21:46:29.858255  978470 command_runner.go:130] > Platform:         linux/amd64
	I0830 21:46:29.858263  978470 command_runner.go:130] > Linkmode:         dynamic
	I0830 21:46:29.858279  978470 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 21:46:29.858285  978470 command_runner.go:130] > SeccompEnabled:   true
	I0830 21:46:29.858292  978470 command_runner.go:130] > AppArmorEnabled:  false
	I0830 21:46:29.861485  978470 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 21:46:29.862984  978470 out.go:177]   - env NO_PROXY=192.168.39.20
	I0830 21:46:29.864394  978470 out.go:177]   - env NO_PROXY=192.168.39.20,192.168.39.46
	I0830 21:46:29.865722  978470 main.go:141] libmachine: (multinode-752665-m03) Calling .GetIP
	I0830 21:46:29.868310  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:29.868750  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9b:a2", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:83:9b:a2 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-752665-m03 Clientid:01:52:54:00:83:9b:a2}
	I0830 21:46:29.868782  978470 main.go:141] libmachine: (multinode-752665-m03) DBG | domain multinode-752665-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:83:9b:a2 in network mk-multinode-752665
	I0830 21:46:29.868948  978470 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 21:46:29.873383  978470 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0830 21:46:29.873436  978470 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665 for IP: 192.168.39.30
	I0830 21:46:29.873458  978470 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:46:29.873630  978470 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 21:46:29.873688  978470 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 21:46:29.873708  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 21:46:29.873729  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 21:46:29.873745  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 21:46:29.873766  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 21:46:29.873827  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 21:46:29.873858  978470 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 21:46:29.873869  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 21:46:29.873896  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 21:46:29.873919  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:46:29.873943  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 21:46:29.873983  978470 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:46:29.874014  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> /usr/share/ca-certificates/9626212.pem
	I0830 21:46:29.874029  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:46:29.874043  978470 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem -> /usr/share/ca-certificates/962621.pem
	I0830 21:46:29.874413  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:46:29.899029  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:46:29.920988  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:46:29.943152  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 21:46:29.966754  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 21:46:29.989375  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:46:30.011505  978470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 21:46:30.034014  978470 ssh_runner.go:195] Run: openssl version
	I0830 21:46:30.039465  978470 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0830 21:46:30.039809  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 21:46:30.049465  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 21:46:30.054442  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:46:30.054469  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:46:30.054515  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 21:46:30.060062  978470 command_runner.go:130] > 3ec20f2e
	I0830 21:46:30.060141  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 21:46:30.068408  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:46:30.077994  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:46:30.082558  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:46:30.082780  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:46:30.082844  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:46:30.088601  978470 command_runner.go:130] > b5213941
	I0830 21:46:30.088960  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:46:30.097317  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 21:46:30.107210  978470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 21:46:30.112145  978470 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:46:30.112263  978470 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:46:30.112317  978470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 21:46:30.117955  978470 command_runner.go:130] > 51391683
	I0830 21:46:30.118041  978470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
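	The openssl/ln pairs above implement OpenSSL's hashed-directory CA lookup: verification code searches /etc/ssl/certs for a file named after the certificate's subject hash, so each installed CA gets a '<hash>.0' symlink. A minimal sketch of one of the links created in this run, using the hash the log actually reports:

		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"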
	I0830 21:46:30.126528  978470 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:46:30.131691  978470 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:46:30.131729  978470 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:46:30.131843  978470 ssh_runner.go:195] Run: crio config
	I0830 21:46:30.191342  978470 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0830 21:46:30.191384  978470 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0830 21:46:30.191395  978470 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0830 21:46:30.191401  978470 command_runner.go:130] > #
	I0830 21:46:30.191412  978470 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0830 21:46:30.191422  978470 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0830 21:46:30.191432  978470 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0830 21:46:30.191452  978470 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0830 21:46:30.191462  978470 command_runner.go:130] > # reload'.
	I0830 21:46:30.191478  978470 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0830 21:46:30.191492  978470 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0830 21:46:30.191506  978470 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0830 21:46:30.191520  978470 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0830 21:46:30.191529  978470 command_runner.go:130] > [crio]
	I0830 21:46:30.191541  978470 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0830 21:46:30.191553  978470 command_runner.go:130] > # containers images, in this directory.
	I0830 21:46:30.191565  978470 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0830 21:46:30.191584  978470 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0830 21:46:30.191596  978470 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0830 21:46:30.191612  978470 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0830 21:46:30.191625  978470 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0830 21:46:30.191636  978470 command_runner.go:130] > storage_driver = "overlay"
	I0830 21:46:30.191648  978470 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0830 21:46:30.191661  978470 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0830 21:46:30.191671  978470 command_runner.go:130] > storage_option = [
	I0830 21:46:30.191682  978470 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0830 21:46:30.191691  978470 command_runner.go:130] > ]
	I0830 21:46:30.191704  978470 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0830 21:46:30.191718  978470 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0830 21:46:30.191729  978470 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0830 21:46:30.191743  978470 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0830 21:46:30.191757  978470 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0830 21:46:30.191781  978470 command_runner.go:130] > # always happen on a node reboot
	I0830 21:46:30.191793  978470 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0830 21:46:30.191803  978470 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0830 21:46:30.191815  978470 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0830 21:46:30.191837  978470 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0830 21:46:30.191888  978470 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0830 21:46:30.191906  978470 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0830 21:46:30.191923  978470 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0830 21:46:30.191932  978470 command_runner.go:130] > # internal_wipe = true
	I0830 21:46:30.191944  978470 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0830 21:46:30.191954  978470 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0830 21:46:30.191980  978470 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0830 21:46:30.191993  978470 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0830 21:46:30.192007  978470 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0830 21:46:30.192017  978470 command_runner.go:130] > [crio.api]
	I0830 21:46:30.192031  978470 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0830 21:46:30.192042  978470 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0830 21:46:30.192056  978470 command_runner.go:130] > # IP address on which the stream server will listen.
	I0830 21:46:30.192068  978470 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0830 21:46:30.192084  978470 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0830 21:46:30.192099  978470 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0830 21:46:30.192168  978470 command_runner.go:130] > # stream_port = "0"
	I0830 21:46:30.192185  978470 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0830 21:46:30.192190  978470 command_runner.go:130] > # stream_enable_tls = false
	I0830 21:46:30.192196  978470 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0830 21:46:30.192200  978470 command_runner.go:130] > # stream_idle_timeout = ""
	I0830 21:46:30.192211  978470 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0830 21:46:30.192226  978470 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0830 21:46:30.192237  978470 command_runner.go:130] > # minutes.
	I0830 21:46:30.192245  978470 command_runner.go:130] > # stream_tls_cert = ""
	I0830 21:46:30.192251  978470 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0830 21:46:30.192257  978470 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0830 21:46:30.192262  978470 command_runner.go:130] > # stream_tls_key = ""
	I0830 21:46:30.192268  978470 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0830 21:46:30.192274  978470 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0830 21:46:30.192280  978470 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0830 21:46:30.192285  978470 command_runner.go:130] > # stream_tls_ca = ""
	I0830 21:46:30.192294  978470 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:46:30.192304  978470 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0830 21:46:30.192317  978470 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 21:46:30.192329  978470 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0830 21:46:30.192348  978470 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0830 21:46:30.192358  978470 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0830 21:46:30.192367  978470 command_runner.go:130] > [crio.runtime]
	I0830 21:46:30.192373  978470 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0830 21:46:30.192380  978470 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0830 21:46:30.192384  978470 command_runner.go:130] > # "nofile=1024:2048"
	I0830 21:46:30.192390  978470 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0830 21:46:30.192394  978470 command_runner.go:130] > # default_ulimits = [
	I0830 21:46:30.192397  978470 command_runner.go:130] > # ]
	I0830 21:46:30.192403  978470 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0830 21:46:30.192407  978470 command_runner.go:130] > # no_pivot = false
	I0830 21:46:30.192413  978470 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0830 21:46:30.192423  978470 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0830 21:46:30.192432  978470 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0830 21:46:30.192440  978470 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0830 21:46:30.192449  978470 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0830 21:46:30.192462  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:46:30.192473  978470 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0830 21:46:30.192487  978470 command_runner.go:130] > # Cgroup setting for conmon
	I0830 21:46:30.192499  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0830 21:46:30.192509  978470 command_runner.go:130] > conmon_cgroup = "pod"
	I0830 21:46:30.192519  978470 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0830 21:46:30.192531  978470 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0830 21:46:30.192543  978470 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 21:46:30.192555  978470 command_runner.go:130] > conmon_env = [
	I0830 21:46:30.192567  978470 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0830 21:46:30.192575  978470 command_runner.go:130] > ]
	I0830 21:46:30.192584  978470 command_runner.go:130] > # Additional environment variables to set for all the
	I0830 21:46:30.192596  978470 command_runner.go:130] > # containers. These are overridden if set in the
	I0830 21:46:30.192606  978470 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0830 21:46:30.192615  978470 command_runner.go:130] > # default_env = [
	I0830 21:46:30.192620  978470 command_runner.go:130] > # ]
	I0830 21:46:30.192629  978470 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0830 21:46:30.192633  978470 command_runner.go:130] > # selinux = false
	I0830 21:46:30.192644  978470 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0830 21:46:30.192650  978470 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0830 21:46:30.192658  978470 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0830 21:46:30.192662  978470 command_runner.go:130] > # seccomp_profile = ""
	I0830 21:46:30.192673  978470 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0830 21:46:30.192681  978470 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0830 21:46:30.192727  978470 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0830 21:46:30.192735  978470 command_runner.go:130] > # which might increase security.
	I0830 21:46:30.192740  978470 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0830 21:46:30.192746  978470 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0830 21:46:30.192752  978470 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0830 21:46:30.192758  978470 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0830 21:46:30.192764  978470 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0830 21:46:30.192771  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:46:30.192775  978470 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0830 21:46:30.192781  978470 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0830 21:46:30.192786  978470 command_runner.go:130] > # the cgroup blockio controller.
	I0830 21:46:30.192791  978470 command_runner.go:130] > # blockio_config_file = ""
	I0830 21:46:30.192799  978470 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0830 21:46:30.192804  978470 command_runner.go:130] > # irqbalance daemon.
	I0830 21:46:30.192812  978470 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0830 21:46:30.192826  978470 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0830 21:46:30.192838  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:46:30.192844  978470 command_runner.go:130] > # rdt_config_file = ""
	I0830 21:46:30.192854  978470 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0830 21:46:30.192861  978470 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0830 21:46:30.192875  978470 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0830 21:46:30.192886  978470 command_runner.go:130] > # separate_pull_cgroup = ""
	I0830 21:46:30.192898  978470 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0830 21:46:30.192913  978470 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0830 21:46:30.192923  978470 command_runner.go:130] > # will be added.
	I0830 21:46:30.192930  978470 command_runner.go:130] > # default_capabilities = [
	I0830 21:46:30.192938  978470 command_runner.go:130] > # 	"CHOWN",
	I0830 21:46:30.192942  978470 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0830 21:46:30.192946  978470 command_runner.go:130] > # 	"FSETID",
	I0830 21:46:30.192949  978470 command_runner.go:130] > # 	"FOWNER",
	I0830 21:46:30.192955  978470 command_runner.go:130] > # 	"SETGID",
	I0830 21:46:30.192963  978470 command_runner.go:130] > # 	"SETUID",
	I0830 21:46:30.192970  978470 command_runner.go:130] > # 	"SETPCAP",
	I0830 21:46:30.192978  978470 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0830 21:46:30.192988  978470 command_runner.go:130] > # 	"KILL",
	I0830 21:46:30.192994  978470 command_runner.go:130] > # ]
	I0830 21:46:30.193008  978470 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0830 21:46:30.193021  978470 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:46:30.193030  978470 command_runner.go:130] > # default_sysctls = [
	I0830 21:46:30.193035  978470 command_runner.go:130] > # ]
	I0830 21:46:30.193046  978470 command_runner.go:130] > # List of devices on the host that a
	I0830 21:46:30.193056  978470 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0830 21:46:30.193065  978470 command_runner.go:130] > # allowed_devices = [
	I0830 21:46:30.193071  978470 command_runner.go:130] > # 	"/dev/fuse",
	I0830 21:46:30.193080  978470 command_runner.go:130] > # ]
	I0830 21:46:30.193088  978470 command_runner.go:130] > # List of additional devices. specified as
	I0830 21:46:30.193103  978470 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0830 21:46:30.193114  978470 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0830 21:46:30.193137  978470 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 21:46:30.193148  978470 command_runner.go:130] > # additional_devices = [
	I0830 21:46:30.193153  978470 command_runner.go:130] > # ]
	I0830 21:46:30.193165  978470 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0830 21:46:30.193171  978470 command_runner.go:130] > # cdi_spec_dirs = [
	I0830 21:46:30.193180  978470 command_runner.go:130] > # 	"/etc/cdi",
	I0830 21:46:30.193187  978470 command_runner.go:130] > # 	"/var/run/cdi",
	I0830 21:46:30.193195  978470 command_runner.go:130] > # ]
	I0830 21:46:30.193206  978470 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0830 21:46:30.193216  978470 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0830 21:46:30.193220  978470 command_runner.go:130] > # Defaults to false.
	I0830 21:46:30.193249  978470 command_runner.go:130] > # device_ownership_from_security_context = false
	I0830 21:46:30.193258  978470 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0830 21:46:30.193265  978470 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0830 21:46:30.193271  978470 command_runner.go:130] > # hooks_dir = [
	I0830 21:46:30.193276  978470 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0830 21:46:30.193281  978470 command_runner.go:130] > # ]
	I0830 21:46:30.193287  978470 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0830 21:46:30.193295  978470 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0830 21:46:30.193303  978470 command_runner.go:130] > # its default mounts from the following two files:
	I0830 21:46:30.193307  978470 command_runner.go:130] > #
	I0830 21:46:30.193315  978470 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0830 21:46:30.193321  978470 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0830 21:46:30.193329  978470 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0830 21:46:30.193332  978470 command_runner.go:130] > #
	I0830 21:46:30.193339  978470 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0830 21:46:30.193348  978470 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0830 21:46:30.193357  978470 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0830 21:46:30.193361  978470 command_runner.go:130] > #      only add mounts it finds in this file.
	I0830 21:46:30.193368  978470 command_runner.go:130] > #
	I0830 21:46:30.193372  978470 command_runner.go:130] > # default_mounts_file = ""
	I0830 21:46:30.193378  978470 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0830 21:46:30.193384  978470 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0830 21:46:30.193389  978470 command_runner.go:130] > pids_limit = 1024
	I0830 21:46:30.193395  978470 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0830 21:46:30.193403  978470 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0830 21:46:30.193409  978470 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0830 21:46:30.193419  978470 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0830 21:46:30.193425  978470 command_runner.go:130] > # log_size_max = -1
	I0830 21:46:30.193431  978470 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0830 21:46:30.193438  978470 command_runner.go:130] > # log_to_journald = false
	I0830 21:46:30.193448  978470 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0830 21:46:30.193459  978470 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0830 21:46:30.193471  978470 command_runner.go:130] > # Path to directory for container attach sockets.
	I0830 21:46:30.193479  978470 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0830 21:46:30.193490  978470 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0830 21:46:30.193498  978470 command_runner.go:130] > # bind_mount_prefix = ""
	I0830 21:46:30.193504  978470 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0830 21:46:30.193509  978470 command_runner.go:130] > # read_only = false
	I0830 21:46:30.193515  978470 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0830 21:46:30.193524  978470 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0830 21:46:30.193528  978470 command_runner.go:130] > # live configuration reload.
	I0830 21:46:30.193535  978470 command_runner.go:130] > # log_level = "info"
	I0830 21:46:30.193541  978470 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0830 21:46:30.193549  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:46:30.193555  978470 command_runner.go:130] > # log_filter = ""
	I0830 21:46:30.193569  978470 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0830 21:46:30.193580  978470 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0830 21:46:30.193589  978470 command_runner.go:130] > # separated by comma.
	I0830 21:46:30.193595  978470 command_runner.go:130] > # uid_mappings = ""
	I0830 21:46:30.193609  978470 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0830 21:46:30.193618  978470 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0830 21:46:30.193625  978470 command_runner.go:130] > # separated by comma.
	I0830 21:46:30.193634  978470 command_runner.go:130] > # gid_mappings = ""
	I0830 21:46:30.193643  978470 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0830 21:46:30.193656  978470 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:46:30.193674  978470 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:46:30.193686  978470 command_runner.go:130] > # minimum_mappable_uid = -1
	I0830 21:46:30.193696  978470 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0830 21:46:30.193705  978470 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 21:46:30.193711  978470 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 21:46:30.193717  978470 command_runner.go:130] > # minimum_mappable_gid = -1
	I0830 21:46:30.193723  978470 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0830 21:46:30.193731  978470 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0830 21:46:30.193737  978470 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0830 21:46:30.193744  978470 command_runner.go:130] > # ctr_stop_timeout = 30
	I0830 21:46:30.193749  978470 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0830 21:46:30.193758  978470 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0830 21:46:30.193762  978470 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0830 21:46:30.193770  978470 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0830 21:46:30.193780  978470 command_runner.go:130] > drop_infra_ctr = false
	I0830 21:46:30.193790  978470 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0830 21:46:30.193804  978470 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0830 21:46:30.193819  978470 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0830 21:46:30.193830  978470 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0830 21:46:30.193840  978470 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0830 21:46:30.193878  978470 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0830 21:46:30.193890  978470 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0830 21:46:30.193903  978470 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0830 21:46:30.193914  978470 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0830 21:46:30.193927  978470 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0830 21:46:30.193940  978470 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0830 21:46:30.193950  978470 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0830 21:46:30.193955  978470 command_runner.go:130] > # default_runtime = "runc"
	I0830 21:46:30.193963  978470 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0830 21:46:30.193975  978470 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0830 21:46:30.193994  978470 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0830 21:46:30.194005  978470 command_runner.go:130] > # creation as a file is not desired either.
	I0830 21:46:30.194022  978470 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0830 21:46:30.194034  978470 command_runner.go:130] > # the hostname is being managed dynamically.
	I0830 21:46:30.194042  978470 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0830 21:46:30.194048  978470 command_runner.go:130] > # ]
	I0830 21:46:30.194057  978470 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0830 21:46:30.194071  978470 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0830 21:46:30.194086  978470 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0830 21:46:30.194099  978470 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0830 21:46:30.194108  978470 command_runner.go:130] > #
	I0830 21:46:30.194117  978470 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0830 21:46:30.194127  978470 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0830 21:46:30.194133  978470 command_runner.go:130] > #  runtime_type = "oci"
	I0830 21:46:30.194143  978470 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0830 21:46:30.194152  978470 command_runner.go:130] > #  privileged_without_host_devices = false
	I0830 21:46:30.194163  978470 command_runner.go:130] > #  allowed_annotations = []
	I0830 21:46:30.194169  978470 command_runner.go:130] > # Where:
	I0830 21:46:30.194179  978470 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0830 21:46:30.194193  978470 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0830 21:46:30.194206  978470 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0830 21:46:30.194215  978470 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0830 21:46:30.194221  978470 command_runner.go:130] > #   in $PATH.
	I0830 21:46:30.194234  978470 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0830 21:46:30.194246  978470 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0830 21:46:30.194260  978470 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0830 21:46:30.194270  978470 command_runner.go:130] > #   state.
	I0830 21:46:30.194281  978470 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0830 21:46:30.194295  978470 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0830 21:46:30.194303  978470 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0830 21:46:30.194312  978470 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0830 21:46:30.194326  978470 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0830 21:46:30.194341  978470 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0830 21:46:30.194352  978470 command_runner.go:130] > #   The currently recognized values are:
	I0830 21:46:30.194366  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0830 21:46:30.194381  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0830 21:46:30.194390  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0830 21:46:30.194402  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0830 21:46:30.194418  978470 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0830 21:46:30.194433  978470 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0830 21:46:30.194446  978470 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0830 21:46:30.194460  978470 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0830 21:46:30.194469  978470 command_runner.go:130] > #   should be moved to the container's cgroup
	I0830 21:46:30.194473  978470 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0830 21:46:30.194479  978470 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0830 21:46:30.194489  978470 command_runner.go:130] > runtime_type = "oci"
	I0830 21:46:30.194499  978470 command_runner.go:130] > runtime_root = "/run/runc"
	I0830 21:46:30.194509  978470 command_runner.go:130] > runtime_config_path = ""
	I0830 21:46:30.194519  978470 command_runner.go:130] > monitor_path = ""
	I0830 21:46:30.194529  978470 command_runner.go:130] > monitor_cgroup = ""
	I0830 21:46:30.194539  978470 command_runner.go:130] > monitor_exec_cgroup = ""
	I0830 21:46:30.194551  978470 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0830 21:46:30.194558  978470 command_runner.go:130] > # running containers
	I0830 21:46:30.194563  978470 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0830 21:46:30.194577  978470 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0830 21:46:30.194610  978470 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0830 21:46:30.194623  978470 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0830 21:46:30.194634  978470 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0830 21:46:30.194642  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0830 21:46:30.194646  978470 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0830 21:46:30.194657  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0830 21:46:30.194674  978470 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0830 21:46:30.194684  978470 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0830 21:46:30.194718  978470 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0830 21:46:30.194727  978470 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0830 21:46:30.194737  978470 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0830 21:46:30.194755  978470 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0830 21:46:30.194771  978470 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0830 21:46:30.194784  978470 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0830 21:46:30.194803  978470 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0830 21:46:30.194814  978470 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0830 21:46:30.194826  978470 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0830 21:46:30.194842  978470 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0830 21:46:30.194851  978470 command_runner.go:130] > # Example:
	I0830 21:46:30.194859  978470 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0830 21:46:30.194871  978470 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0830 21:46:30.194881  978470 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0830 21:46:30.194892  978470 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0830 21:46:30.194899  978470 command_runner.go:130] > # cpuset = 0
	I0830 21:46:30.194903  978470 command_runner.go:130] > # cpushares = "0-1"
	I0830 21:46:30.194912  978470 command_runner.go:130] > # Where:
	I0830 21:46:30.194924  978470 command_runner.go:130] > # The workload name is workload-type.
	I0830 21:46:30.194936  978470 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0830 21:46:30.194948  978470 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0830 21:46:30.194962  978470 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0830 21:46:30.194977  978470 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0830 21:46:30.194985  978470 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0830 21:46:30.194993  978470 command_runner.go:130] > # 
	I0830 21:46:30.195005  978470 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0830 21:46:30.195013  978470 command_runner.go:130] > #
	I0830 21:46:30.195023  978470 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0830 21:46:30.195037  978470 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0830 21:46:30.195050  978470 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0830 21:46:30.195063  978470 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0830 21:46:30.195071  978470 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0830 21:46:30.195080  978470 command_runner.go:130] > [crio.image]
	I0830 21:46:30.195094  978470 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0830 21:46:30.195104  978470 command_runner.go:130] > # default_transport = "docker://"
	I0830 21:46:30.195115  978470 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0830 21:46:30.195128  978470 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:46:30.195138  978470 command_runner.go:130] > # global_auth_file = ""
	I0830 21:46:30.195148  978470 command_runner.go:130] > # The image used to instantiate infra containers.
	I0830 21:46:30.195157  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:46:30.195168  978470 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0830 21:46:30.195183  978470 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0830 21:46:30.195196  978470 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0830 21:46:30.195207  978470 command_runner.go:130] > # This option supports live configuration reload.
	I0830 21:46:30.195217  978470 command_runner.go:130] > # pause_image_auth_file = ""
	I0830 21:46:30.195229  978470 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0830 21:46:30.195239  978470 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0830 21:46:30.195251  978470 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0830 21:46:30.195264  978470 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0830 21:46:30.195275  978470 command_runner.go:130] > # pause_command = "/pause"
	I0830 21:46:30.195288  978470 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0830 21:46:30.195302  978470 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0830 21:46:30.195315  978470 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0830 21:46:30.195324  978470 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0830 21:46:30.195336  978470 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0830 21:46:30.195346  978470 command_runner.go:130] > # signature_policy = ""
	I0830 21:46:30.195360  978470 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0830 21:46:30.195374  978470 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0830 21:46:30.195383  978470 command_runner.go:130] > # changing them here.
	I0830 21:46:30.195393  978470 command_runner.go:130] > # insecure_registries = [
	I0830 21:46:30.195401  978470 command_runner.go:130] > # ]
	I0830 21:46:30.195407  978470 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0830 21:46:30.195417  978470 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0830 21:46:30.195427  978470 command_runner.go:130] > # image_volumes = "mkdir"
	I0830 21:46:30.195439  978470 command_runner.go:130] > # Temporary directory to use for storing big files
	I0830 21:46:30.195450  978470 command_runner.go:130] > # big_files_temporary_dir = ""
	I0830 21:46:30.195463  978470 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0830 21:46:30.195472  978470 command_runner.go:130] > # CNI plugins.
	I0830 21:46:30.195481  978470 command_runner.go:130] > [crio.network]
	I0830 21:46:30.195510  978470 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0830 21:46:30.195524  978470 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0830 21:46:30.195535  978470 command_runner.go:130] > # cni_default_network = ""
	I0830 21:46:30.195551  978470 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0830 21:46:30.195562  978470 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0830 21:46:30.195573  978470 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0830 21:46:30.195593  978470 command_runner.go:130] > # plugin_dirs = [
	I0830 21:46:30.195604  978470 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0830 21:46:30.195613  978470 command_runner.go:130] > # ]
	I0830 21:46:30.195623  978470 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0830 21:46:30.195632  978470 command_runner.go:130] > [crio.metrics]
	I0830 21:46:30.195640  978470 command_runner.go:130] > # Globally enable or disable metrics support.
	I0830 21:46:30.195650  978470 command_runner.go:130] > enable_metrics = true
	I0830 21:46:30.195658  978470 command_runner.go:130] > # Specify enabled metrics collectors.
	I0830 21:46:30.195674  978470 command_runner.go:130] > # Per default all metrics are enabled.
	I0830 21:46:30.195688  978470 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0830 21:46:30.195702  978470 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0830 21:46:30.195716  978470 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0830 21:46:30.195726  978470 command_runner.go:130] > # metrics_collectors = [
	I0830 21:46:30.195735  978470 command_runner.go:130] > # 	"operations",
	I0830 21:46:30.195746  978470 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0830 21:46:30.195756  978470 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0830 21:46:30.195764  978470 command_runner.go:130] > # 	"operations_errors",
	I0830 21:46:30.195785  978470 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0830 21:46:30.195793  978470 command_runner.go:130] > # 	"image_pulls_by_name",
	I0830 21:46:30.195800  978470 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0830 21:46:30.195807  978470 command_runner.go:130] > # 	"image_pulls_failures",
	I0830 21:46:30.195814  978470 command_runner.go:130] > # 	"image_pulls_successes",
	I0830 21:46:30.195823  978470 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0830 21:46:30.195829  978470 command_runner.go:130] > # 	"image_layer_reuse",
	I0830 21:46:30.195838  978470 command_runner.go:130] > # 	"containers_oom_total",
	I0830 21:46:30.195845  978470 command_runner.go:130] > # 	"containers_oom",
	I0830 21:46:30.195854  978470 command_runner.go:130] > # 	"processes_defunct",
	I0830 21:46:30.195861  978470 command_runner.go:130] > # 	"operations_total",
	I0830 21:46:30.195871  978470 command_runner.go:130] > # 	"operations_latency_seconds",
	I0830 21:46:30.195881  978470 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0830 21:46:30.195891  978470 command_runner.go:130] > # 	"operations_errors_total",
	I0830 21:46:30.195899  978470 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0830 21:46:30.195909  978470 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0830 21:46:30.195918  978470 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0830 21:46:30.195928  978470 command_runner.go:130] > # 	"image_pulls_success_total",
	I0830 21:46:30.195937  978470 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0830 21:46:30.195948  978470 command_runner.go:130] > # 	"containers_oom_count_total",
	I0830 21:46:30.195956  978470 command_runner.go:130] > # ]
	I0830 21:46:30.195965  978470 command_runner.go:130] > # The port on which the metrics server will listen.
	I0830 21:46:30.195973  978470 command_runner.go:130] > # metrics_port = 9090
	I0830 21:46:30.195981  978470 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0830 21:46:30.195990  978470 command_runner.go:130] > # metrics_socket = ""
	I0830 21:46:30.195999  978470 command_runner.go:130] > # The certificate for the secure metrics server.
	I0830 21:46:30.196011  978470 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0830 21:46:30.196024  978470 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0830 21:46:30.196035  978470 command_runner.go:130] > # certificate on any modification event.
	I0830 21:46:30.196044  978470 command_runner.go:130] > # metrics_cert = ""
	I0830 21:46:30.196054  978470 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0830 21:46:30.196068  978470 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0830 21:46:30.196077  978470 command_runner.go:130] > # metrics_key = ""
	I0830 21:46:30.196087  978470 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0830 21:46:30.196096  978470 command_runner.go:130] > [crio.tracing]
	I0830 21:46:30.196103  978470 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0830 21:46:30.196110  978470 command_runner.go:130] > # enable_tracing = false
	I0830 21:46:30.196115  978470 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0830 21:46:30.196122  978470 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0830 21:46:30.196127  978470 command_runner.go:130] > # Number of samples to collect per million spans.
	I0830 21:46:30.196133  978470 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0830 21:46:30.196139  978470 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0830 21:46:30.196145  978470 command_runner.go:130] > [crio.stats]
	I0830 21:46:30.196150  978470 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0830 21:46:30.196156  978470 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0830 21:46:30.196160  978470 command_runner.go:130] > # stats_collection_period = 0
	I0830 21:46:30.196203  978470 command_runner.go:130] ! time="2023-08-30 21:46:30.180667446Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0830 21:46:30.196229  978470 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
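	The block above is the tail of the crio.conf (TOML) that CRI-O started with; the values set explicitly rather than left as commented defaults include pids_limit = 1024, drop_infra_ctr = false, pinns_path = "/usr/bin/pinns" and pause_image = "registry.k8s.io/pause:3.9". As a minimal illustration of reading those fields back programmatically (a sketch only: the github.com/BurntSushi/toml parser and the /etc/crio/crio.conf path are assumptions, not how minikube or CRI-O load the file):

	// Hypothetical sketch: decode a few of the crio.conf values echoed above.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type crioConf struct {
		Crio struct {
			Runtime struct {
				PidsLimit int64  `toml:"pids_limit"`
				PinnsPath string `toml:"pinns_path"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
		} `toml:"crio"`
	}

	func main() {
		var c crioConf
		// Path assumed; point it at wherever the CRI-O config actually lives on the node.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
			log.Fatal(err)
		}
		// With the config dumped above this prints: 1024 /usr/bin/pinns registry.k8s.io/pause:3.9
		fmt.Println(c.Crio.Runtime.PidsLimit, c.Crio.Runtime.PinnsPath, c.Crio.Image.PauseImage)
	}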
	I0830 21:46:30.196295  978470 cni.go:84] Creating CNI manager for ""
	I0830 21:46:30.196304  978470 cni.go:136] 3 nodes found, recommending kindnet
	I0830 21:46:30.196314  978470 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:46:30.196335  978470 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.30 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-752665 NodeName:multinode-752665-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:46:30.196468  978470 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-752665-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:46:30.196518  978470 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-752665-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 21:46:30.196571  978470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 21:46:30.206258  978470 command_runner.go:130] > kubeadm
	I0830 21:46:30.206280  978470 command_runner.go:130] > kubectl
	I0830 21:46:30.206286  978470 command_runner.go:130] > kubelet
	I0830 21:46:30.206313  978470 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:46:30.206364  978470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0830 21:46:30.215438  978470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0830 21:46:30.231845  978470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 21:46:30.248152  978470 ssh_runner.go:195] Run: grep 192.168.39.20	control-plane.minikube.internal$ /etc/hosts
	I0830 21:46:30.252434  978470 command_runner.go:130] > 192.168.39.20	control-plane.minikube.internal
	I0830 21:46:30.252621  978470 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:46:30.252960  978470 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:46:30.253094  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:46:30.253142  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:46:30.269048  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0830 21:46:30.269523  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:46:30.270049  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:46:30.270068  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:46:30.270425  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:46:30.270595  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:46:30.270735  978470 start.go:301] JoinCluster: &{Name:multinode-752665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-752665 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.46 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:46:30.270868  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0830 21:46:30.270886  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:46:30.273787  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:46:30.274199  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:46:30.274237  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:46:30.274340  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:46:30.274534  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:46:30.274693  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:46:30.274832  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:46:30.466419  978470 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0pctsi.imhil95uv3t3zlod --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 21:46:30.466770  978470 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0830 21:46:30.466828  978470 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:46:30.467276  978470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:46:30.467336  978470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:46:30.483382  978470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0830 21:46:30.483901  978470 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:46:30.484467  978470 main.go:141] libmachine: Using API Version  1
	I0830 21:46:30.484490  978470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:46:30.484823  978470 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:46:30.485038  978470 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:46:30.485258  978470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-752665-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0830 21:46:30.485291  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:46:30.488666  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:46:30.489197  978470 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:42:23 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:46:30.489226  978470 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:46:30.489424  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:46:30.489582  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:46:30.489741  978470 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:46:30.489900  978470 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:46:30.692976  978470 command_runner.go:130] > node/multinode-752665-m03 cordoned
	I0830 21:46:33.735985  978470 command_runner.go:130] > pod "busybox-5bc68d56bd-f5rjq" has DeletionTimestamp older than 1 seconds, skipping
	I0830 21:46:33.736009  978470 command_runner.go:130] > node/multinode-752665-m03 drained
	I0830 21:46:33.737973  978470 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0830 21:46:33.737991  978470 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-d4xrz, kube-system/kube-proxy-jwftn
	I0830 21:46:33.738013  978470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-752665-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.252728058s)
	I0830 21:46:33.738027  978470 node.go:108] successfully drained node "m03"
	I0830 21:46:33.738367  978470 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:46:33.738581  978470 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:46:33.738978  978470 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0830 21:46:33.739039  978470 round_trippers.go:463] DELETE https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:46:33.739051  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:33.739059  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:33.739067  978470 round_trippers.go:473]     Content-Type: application/json
	I0830 21:46:33.739075  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:33.751816  978470 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0830 21:46:33.751835  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:33.751841  978470 round_trippers.go:580]     Audit-Id: 9077f2ea-f138-4fea-9b3a-aac9b76e132d
	I0830 21:46:33.751847  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:33.751853  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:33.751858  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:33.751863  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:33.751868  978470 round_trippers.go:580]     Content-Length: 171
	I0830 21:46:33.751874  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:33 GMT
	I0830 21:46:33.752349  978470 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-752665-m03","kind":"nodes","uid":"2c7759fc-7c08-4ea2-b0c4-b56d98a23e6f"}}
	I0830 21:46:33.752403  978470 node.go:124] successfully deleted node "m03"
	I0830 21:46:33.752419  978470 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
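	The DELETE request above is the standard Kubernetes node-deletion call, sent straight to the API server before the node is re-joined. A minimal client-go sketch of the same operation (illustrative only, not minikube's code; the kubeconfig path is the one shown earlier in the log):

	// Hypothetical sketch: delete a node the way the logged
	// DELETE /api/v1/nodes/multinode-752665-m03 request does.
	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17114-955377/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of the round_trippers DELETE shown above.
		if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-752665-m03", metav1.DeleteOptions{}); err != nil {
			log.Fatal(err)
		}
	}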
	I0830 21:46:33.752448  978470 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0830 21:46:33.752477  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0pctsi.imhil95uv3t3zlod --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-752665-m03"
	I0830 21:46:33.834758  978470 command_runner.go:130] ! W0830 21:46:33.826718    2533 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0830 21:46:33.835016  978470 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0830 21:46:33.955608  978470 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0830 21:46:33.955651  978470 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0830 21:46:34.704450  978470 command_runner.go:130] > [preflight] Running pre-flight checks
	I0830 21:46:34.704486  978470 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0830 21:46:34.704501  978470 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0830 21:46:34.704515  978470 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:46:34.704526  978470 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:46:34.704534  978470 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 21:46:34.704545  978470 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0830 21:46:34.704553  978470 command_runner.go:130] > This node has joined the cluster:
	I0830 21:46:34.704562  978470 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0830 21:46:34.704572  978470 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0830 21:46:34.704583  978470 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0830 21:46:34.704615  978470 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0830 21:46:34.984592  978470 start.go:303] JoinCluster complete in 4.713851287s
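	The join itself is just the printed "kubeadm join ..." command executed on the worker over SSH (the ssh_runner/sshutil lines above). A rough, self-contained sketch of that pattern using golang.org/x/crypto/ssh; the SSH user and worker IP mirror values from this log, the per-machine key path is assumed, and the token/hash placeholders stand in for whatever "kubeadm token create --print-join-command" returns:

	// Hypothetical sketch: run a command on a minikube node over SSH,
	// the way the ssh_runner lines in this log do.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Per-machine key path assumed, following the pattern shown in the log.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m03/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
		}
		client, err := ssh.Dial("tcp", "192.168.39.30:22", cfg) // the worker node being joined
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(`sudo kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all`)
		fmt.Print(string(out))
		if err != nil {
			log.Fatal(err)
		}
	}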
	I0830 21:46:34.984621  978470 cni.go:84] Creating CNI manager for ""
	I0830 21:46:34.984629  978470 cni.go:136] 3 nodes found, recommending kindnet
	I0830 21:46:34.984688  978470 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 21:46:34.990933  978470 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 21:46:34.990966  978470 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0830 21:46:34.990976  978470 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0830 21:46:34.990987  978470 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 21:46:34.990996  978470 command_runner.go:130] > Access: 2023-08-30 21:42:23.592476286 +0000
	I0830 21:46:34.991012  978470 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0830 21:46:34.991023  978470 command_runner.go:130] > Change: 2023-08-30 21:42:21.726476286 +0000
	I0830 21:46:34.991030  978470 command_runner.go:130] >  Birth: -
	I0830 21:46:34.991104  978470 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 21:46:34.991117  978470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 21:46:35.016320  978470 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 21:46:35.383137  978470 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0830 21:46:35.389095  978470 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0830 21:46:35.395124  978470 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0830 21:46:35.405564  978470 command_runner.go:130] > daemonset.apps/kindnet configured
	I0830 21:46:35.408669  978470 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:46:35.409012  978470 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:46:35.409435  978470 round_trippers.go:463] GET https://192.168.39.20:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 21:46:35.409453  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.409464  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.409474  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.411613  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:46:35.411630  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.411640  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.411652  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.411666  978470 round_trippers.go:580]     Content-Length: 291
	I0830 21:46:35.411678  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.411687  978470 round_trippers.go:580]     Audit-Id: 5f28ec82-01e2-4662-a3b4-d08bfa26d952
	I0830 21:46:35.411700  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.411713  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.411757  978470 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4cda7228-5995-4a40-902e-7c8e87f8c72e","resourceVersion":"858","creationTimestamp":"2023-08-30T21:32:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0830 21:46:35.411874  978470 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-752665" context rescaled to 1 replicas
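	The rescale above goes through the Deployment's scale subresource (the GET .../deployments/coredns/scale request followed by an update to one replica). Continuing the client-go sketch from the node-deletion example, a hypothetical helper doing the same read-modify-write:

	// Hypothetical helper: pin the coredns Deployment to one replica via its
	// scale subresource, mirroring the kapi.go rescale logged above.
	// Reuses the imports and clientset construction from the earlier sketch.
	func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if scale.Spec.Replicas == 1 {
			return nil // already at the desired count, nothing to do
		}
		scale.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}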
	I0830 21:46:35.411911  978470 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.30 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0830 21:46:35.414819  978470 out.go:177] * Verifying Kubernetes components...
	I0830 21:46:35.416168  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:46:35.429967  978470 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:46:35.430258  978470 kapi.go:59] client config for multinode-752665: &rest.Config{Host:"https://192.168.39.20:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/multinode-752665/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:46:35.430568  978470 node_ready.go:35] waiting up to 6m0s for node "multinode-752665-m03" to be "Ready" ...
	I0830 21:46:35.430655  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:46:35.430666  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.430678  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.430688  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.433467  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:46:35.433488  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.433498  978470 round_trippers.go:580]     Audit-Id: 9ee47793-ed91-444f-81d3-00275a8b8448
	I0830 21:46:35.433508  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.433515  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.433525  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.433535  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.433544  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.433731  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m03","uid":"ac6aef7a-1daf-463c-b51c-c44be839370f","resourceVersion":"1203","creationTimestamp":"2023-08-30T21:46:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:46:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:46:34Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0830 21:46:35.434071  978470 node_ready.go:49] node "multinode-752665-m03" has status "Ready":"True"
	I0830 21:46:35.434093  978470 node_ready.go:38] duration metric: took 3.502021ms waiting for node "multinode-752665-m03" to be "Ready" ...
	I0830 21:46:35.434102  978470 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:46:35.434171  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods
	I0830 21:46:35.434181  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.434191  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.434202  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.438763  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:46:35.438784  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.438794  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.438802  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.438810  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.438819  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.438828  978470 round_trippers.go:580]     Audit-Id: 81f39000-64be-4af5-bec3-1b91095b6563
	I0830 21:46:35.438839  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.439936  978470 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1209"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"854","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81878 chars]
	I0830 21:46:35.443440  978470 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.443516  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zcppg
	I0830 21:46:35.443527  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.443538  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.443547  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.445780  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:46:35.445795  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.445801  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.445806  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.445812  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.445817  978470 round_trippers.go:580]     Audit-Id: e3142543-6554-404a-8079-12ea69e63405
	I0830 21:46:35.445822  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.445827  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.446285  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zcppg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"4742270b-6c64-411b-bfb6-8c53211aa106","resourceVersion":"854","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6254f4ad-15aa-4101-b650-ff9500018996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6254f4ad-15aa-4101-b650-ff9500018996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0830 21:46:35.446784  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:46:35.446799  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.446809  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.446817  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.448740  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:46:35.448754  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.448760  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.448766  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.448771  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.448776  978470 round_trippers.go:580]     Audit-Id: 81009e23-c19a-4d5e-b371-17c5c071dc97
	I0830 21:46:35.448781  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.448788  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.449065  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:46:35.449327  978470 pod_ready.go:92] pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace has status "Ready":"True"
	I0830 21:46:35.449341  978470 pod_ready.go:81] duration metric: took 5.879547ms waiting for pod "coredns-5dd5756b68-zcppg" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.449349  978470 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.449386  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-752665
	I0830 21:46:35.449393  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.449400  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.449406  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.451242  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:46:35.451257  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.451266  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.451274  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.451282  978470 round_trippers.go:580]     Audit-Id: 29dd7664-593b-49d1-8fd4-1b5942da80ca
	I0830 21:46:35.451290  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.451295  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.451301  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.451468  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-752665","namespace":"kube-system","uid":"25e2609d-f391-4e71-823a-c4fe8625092d","resourceVersion":"830","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.20:2379","kubernetes.io/config.hash":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.mirror":"3d44ed339e19dd41d07034008e5b52b3","kubernetes.io/config.seen":"2023-08-30T21:32:35.235892298Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0830 21:46:35.451786  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:46:35.451798  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.451809  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.451815  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.453610  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:46:35.453625  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.453631  978470 round_trippers.go:580]     Audit-Id: 9d374a3c-0119-4f03-8783-6b4d1b427aaf
	I0830 21:46:35.453636  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.453641  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.453649  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.453658  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.453666  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.453975  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:46:35.454221  978470 pod_ready.go:92] pod "etcd-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:46:35.454231  978470 pod_ready.go:81] duration metric: took 4.878252ms waiting for pod "etcd-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.454245  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.454285  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-752665
	I0830 21:46:35.454292  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.454298  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.454304  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.456100  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:46:35.456118  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.456127  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.456136  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.456150  978470 round_trippers.go:580]     Audit-Id: 1c9c37bd-4ce1-4132-af0e-e006464c5fb9
	I0830 21:46:35.456158  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.456167  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.456178  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.456362  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-752665","namespace":"kube-system","uid":"d813d11d-d0ec-4091-a72b-187bd44eabe3","resourceVersion":"844","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.20:8443","kubernetes.io/config.hash":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.mirror":"063d73d4de1cf2feb4ba920354d72513","kubernetes.io/config.seen":"2023-08-30T21:32:26.214498990Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0830 21:46:35.456675  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:46:35.456684  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.456691  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.456697  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.458577  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:46:35.458594  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.458604  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.458613  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.458622  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.458629  978470 round_trippers.go:580]     Audit-Id: d8dcdfa7-8e14-44e8-8b3c-06e5d9949d99
	I0830 21:46:35.458639  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.458652  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.459043  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:46:35.459282  978470 pod_ready.go:92] pod "kube-apiserver-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:46:35.459292  978470 pod_ready.go:81] duration metric: took 5.041869ms waiting for pod "kube-apiserver-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.459299  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.459336  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-752665
	I0830 21:46:35.459343  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.459350  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.459356  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.461150  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:46:35.461165  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.461171  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.461176  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.461184  978470 round_trippers.go:580]     Audit-Id: 02a58fc6-ed20-47f9-aa34-7556bff1715f
	I0830 21:46:35.461193  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.461208  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.461219  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.461404  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-752665","namespace":"kube-system","uid":"0391b35f-5177-412c-b7d4-073efb2de36b","resourceVersion":"846","creationTimestamp":"2023-08-30T21:32:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.mirror":"c398e6beaac5b42fe6a53cb0b1863506","kubernetes.io/config.seen":"2023-08-30T21:32:26.214500244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0830 21:46:35.461863  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:46:35.461881  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.461892  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.461904  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.463659  978470 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 21:46:35.463674  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.463683  978470 round_trippers.go:580]     Audit-Id: adfd3904-582a-4154-9555-a1c707d7fc2a
	I0830 21:46:35.463692  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.463701  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.463710  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.463720  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.463734  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.464021  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:46:35.464381  978470 pod_ready.go:92] pod "kube-controller-manager-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:46:35.464397  978470 pod_ready.go:81] duration metric: took 5.09119ms waiting for pod "kube-controller-manager-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.464408  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.630794  978470 request.go:629] Waited for 166.311285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:46:35.630895  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5twl5
	I0830 21:46:35.630908  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.630919  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.630932  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.633782  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:46:35.633802  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.633808  978470 round_trippers.go:580]     Audit-Id: 082e1444-e4b6-46c8-ac06-6c636cf445db
	I0830 21:46:35.633817  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.633826  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.633839  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.633851  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.633860  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.634072  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5twl5","generateName":"kube-proxy-","namespace":"kube-system","uid":"ff4250a4-1482-42c0-a523-e97faf806c43","resourceVersion":"1021","creationTimestamp":"2023-08-30T21:33:32Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:33:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0830 21:46:35.830866  978470 request.go:629] Waited for 196.314433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:46:35.830945  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m02
	I0830 21:46:35.830950  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:35.830958  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:35.830967  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:35.833835  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:46:35.833853  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:35.833860  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:35.833865  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:35.833871  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:35.833880  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:35.833889  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:35 GMT
	I0830 21:46:35.833898  978470 round_trippers.go:580]     Audit-Id: 8b6f66fb-7177-4caa-af33-7ef4d665cd21
	I0830 21:46:35.834060  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m02","uid":"ff65bdbe-779a-4252-a23d-cbb7efdf27f9","resourceVersion":"1035","creationTimestamp":"2023-08-30T21:44:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:44:51Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0830 21:46:35.834317  978470 pod_ready.go:92] pod "kube-proxy-5twl5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:46:35.834330  978470 pod_ready.go:81] duration metric: took 369.910254ms waiting for pod "kube-proxy-5twl5" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:35.834340  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jwftn" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:36.030791  978470 request.go:629] Waited for 196.36996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwftn
	I0830 21:46:36.030863  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwftn
	I0830 21:46:36.030868  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:36.030875  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:36.030881  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:36.034128  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:46:36.034147  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:36.034157  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:36.034167  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:36.034183  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:36.034191  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:36.034199  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:36 GMT
	I0830 21:46:36.034206  978470 round_trippers.go:580]     Audit-Id: a526433c-ab4f-4e69-8918-68bdb355ce96
	I0830 21:46:36.034534  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jwftn","generateName":"kube-proxy-","namespace":"kube-system","uid":"bfc888c8-7790-4267-a1fc-cab9448e097b","resourceVersion":"1172","creationTimestamp":"2023-08-30T21:34:21Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:34:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0830 21:46:36.231364  978470 request.go:629] Waited for 196.370278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:46:36.231446  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665-m03
	I0830 21:46:36.231451  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:36.231464  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:36.231470  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:36.234261  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:46:36.234282  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:36.234291  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:36.234301  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:36.234309  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:36.234317  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:36.234328  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:36 GMT
	I0830 21:46:36.234341  978470 round_trippers.go:580]     Audit-Id: 38d756be-49ba-46a6-90ef-8c55d7fc1553
	I0830 21:46:36.234578  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665-m03","uid":"ac6aef7a-1daf-463c-b51c-c44be839370f","resourceVersion":"1203","creationTimestamp":"2023-08-30T21:46:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:46:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:46:34Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0830 21:46:36.234848  978470 pod_ready.go:92] pod "kube-proxy-jwftn" in "kube-system" namespace has status "Ready":"True"
	I0830 21:46:36.234863  978470 pod_ready.go:81] duration metric: took 400.514495ms waiting for pod "kube-proxy-jwftn" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:36.234872  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:36.431352  978470 request.go:629] Waited for 196.392155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:46:36.431422  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vltx5
	I0830 21:46:36.431426  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:36.431434  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:36.431443  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:36.434068  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:46:36.434084  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:36.434091  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:36.434097  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:36.434103  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:36.434111  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:36 GMT
	I0830 21:46:36.434117  978470 round_trippers.go:580]     Audit-Id: ce33626e-31d5-4868-ad3f-ad9b5908141f
	I0830 21:46:36.434125  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:36.434359  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vltx5","generateName":"kube-proxy-","namespace":"kube-system","uid":"24ee271e-5778-4d0c-ab2c-77426f2673b3","resourceVersion":"752","creationTimestamp":"2023-08-30T21:32:47Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65362ebb-6395-42f6-b1ef-371866fe068e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65362ebb-6395-42f6-b1ef-371866fe068e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0830 21:46:36.631126  978470 request.go:629] Waited for 196.345666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:46:36.631206  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:46:36.631211  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:36.631223  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:36.631241  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:36.634302  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:46:36.634325  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:36.634333  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:36 GMT
	I0830 21:46:36.634343  978470 round_trippers.go:580]     Audit-Id: 27464dd4-c0bd-4409-baa9-6a20ff8c8777
	I0830 21:46:36.634354  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:36.634362  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:36.634370  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:36.634377  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:36.634518  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:46:36.634850  978470 pod_ready.go:92] pod "kube-proxy-vltx5" in "kube-system" namespace has status "Ready":"True"
	I0830 21:46:36.634866  978470 pod_ready.go:81] duration metric: took 399.986168ms waiting for pod "kube-proxy-vltx5" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:36.634874  978470 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:36.830769  978470 request.go:629] Waited for 195.808585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:46:36.830837  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-752665
	I0830 21:46:36.830842  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:36.830849  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:36.830856  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:36.834134  978470 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 21:46:36.834152  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:36.834159  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:36.834164  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:36.834170  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:36.834176  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:36.834181  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:36 GMT
	I0830 21:46:36.834186  978470 round_trippers.go:580]     Audit-Id: 6a01d465-bd03-4497-9834-bbeb30f7cbb9
	I0830 21:46:36.834350  978470 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-752665","namespace":"kube-system","uid":"4c8a6a98-51b6-4010-9519-add75ab1a7a9","resourceVersion":"842","creationTimestamp":"2023-08-30T21:32:35Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.mirror":"2957dd3360cebd27e85f1db4b73fa253","kubernetes.io/config.seen":"2023-08-30T21:32:35.235897289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T21:32:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0830 21:46:37.031108  978470 request.go:629] Waited for 196.368841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:46:37.031167  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes/multinode-752665
	I0830 21:46:37.031172  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:37.031180  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:37.031186  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:37.035223  978470 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 21:46:37.035243  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:37.035250  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:37.035257  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:37.035266  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:37 GMT
	I0830 21:46:37.035278  978470 round_trippers.go:580]     Audit-Id: 2c11de8f-c186-4482-97fe-a3c208ebefaa
	I0830 21:46:37.035287  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:37.035299  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:37.035709  978470 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T21:32:31Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0830 21:46:37.036070  978470 pod_ready.go:92] pod "kube-scheduler-multinode-752665" in "kube-system" namespace has status "Ready":"True"
	I0830 21:46:37.036087  978470 pod_ready.go:81] duration metric: took 401.207071ms waiting for pod "kube-scheduler-multinode-752665" in "kube-system" namespace to be "Ready" ...
	I0830 21:46:37.036099  978470 pod_ready.go:38] duration metric: took 1.601984166s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:46:37.036122  978470 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:46:37.036175  978470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:46:37.050479  978470 system_svc.go:56] duration metric: took 14.348756ms WaitForService to wait for kubelet.
	I0830 21:46:37.050505  978470 kubeadm.go:581] duration metric: took 1.638565027s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:46:37.050526  978470 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:46:37.230816  978470 request.go:629] Waited for 180.212378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.20:8443/api/v1/nodes
	I0830 21:46:37.230891  978470 round_trippers.go:463] GET https://192.168.39.20:8443/api/v1/nodes
	I0830 21:46:37.230895  978470 round_trippers.go:469] Request Headers:
	I0830 21:46:37.230904  978470 round_trippers.go:473]     Accept: application/json, */*
	I0830 21:46:37.230910  978470 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0830 21:46:37.233808  978470 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 21:46:37.233835  978470 round_trippers.go:577] Response Headers:
	I0830 21:46:37.233846  978470 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8d09e2de-5170-4de5-b986-149b59ae03e9
	I0830 21:46:37.233856  978470 round_trippers.go:580]     Date: Wed, 30 Aug 2023 21:46:37 GMT
	I0830 21:46:37.233865  978470 round_trippers.go:580]     Audit-Id: 316752a5-e443-4ef1-81fd-6d170e13f20a
	I0830 21:46:37.233880  978470 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 21:46:37.233891  978470 round_trippers.go:580]     Content-Type: application/json
	I0830 21:46:37.233899  978470 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e996313f-cd83-4b22-bc4e-c5d82e9fbb43
	I0830 21:46:37.234250  978470 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1217"},"items":[{"metadata":{"name":"multinode-752665","uid":"330cc3c6-f70d-424e-8c76-544cbd763a37","resourceVersion":"873","creationTimestamp":"2023-08-30T21:32:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-752665","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dcfed3f069eb419c2ffae8f904d3fba5b9405fc5","minikube.k8s.io/name":"multinode-752665","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T21_32_36_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15132 chars]
	I0830 21:46:37.234838  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:46:37.234858  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:46:37.234869  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:46:37.234872  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:46:37.234876  978470 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 21:46:37.234879  978470 node_conditions.go:123] node cpu capacity is 2
	I0830 21:46:37.234883  978470 node_conditions.go:105] duration metric: took 184.352857ms to run NodePressure ...
	I0830 21:46:37.234896  978470 start.go:228] waiting for startup goroutines ...
	I0830 21:46:37.234914  978470 start.go:242] writing updated cluster config ...
	I0830 21:46:37.235241  978470 ssh_runner.go:195] Run: rm -f paused
	I0830 21:46:37.286751  978470 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 21:46:37.289747  978470 out.go:177] * Done! kubectl is now configured to use "multinode-752665" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 21:42:22 UTC, ends at Wed 2023-08-30 21:46:38 UTC. --
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.308918600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f924e45b-363d-4edc-9ff5-a3fa98bad1b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.309009536Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f924e45b-363d-4edc-9ff5-a3fa98bad1b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.309257760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b4f12e1f60a8b578c5b0c4f72357e2847f59a9b954f85641ed57777fae2f0e7,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431806514239799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da629bd30c95063dfbbcb0aa6f9b60384b6991ca429c640b40faa57dec40c50,PodSandboxId:64388704da8e3553b4b7ba212150b8be8d2753b98093d1a05cff068ba60739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431792141859158,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b097d6fbe7bbb1643d909ef729007341ae7fc0ab1ec80207195fd5969955584,PodSandboxId:7e329baa51fcb84e5627e3f4b3a56c4c55cfbee4c10e2f831d562c3c1bb91e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431790767211760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a2043746af8af2e838c63550be9237045e88d206eb4789240487e7c28d4ee5,PodSandboxId:f14557114180d6c8c6216c427519e972cd04d7c1ccde151558efaa6fd672d534,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431777589121684,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73ce9a76a5bbd0d3e23368c634b78d327431b0efbea79d411db2d27e7d123e75,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1693431775357908474,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4e1b35f17d2a1b81fa2d54c757a4b708b4dd37cc7488cc46d693473fcc9bb2,PodSandboxId:d7f7a206e6a5c23a188e65f23d2f395a14e9b7bfe57ef487c8006f7e1929c875,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431775227828638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f26
73b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a033698b7a59dfc49b808574e8dc26210f3ed2c47e3087a9aecc85370aebd4d,PodSandboxId:131a16a99e528bc1acdbaef31394df023ebb2890687d0b17bae7e4303cda43ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431769964651355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annota
tions:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c363b6d55bfb292ea99406be9e755bf39bd27ebf8477ddc9718c3ce73c120db,PodSandboxId:154615dd125abb01ce8020f111e0fa6d9c13f2a91bbfd12b9f0da953a568aa9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431769491957125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe
6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39448b46e3c640ebbdcaae3b540696d2c0b6c2262e2b1675d8fedda15d463d1,PodSandboxId:1259613dda6af4f7145c72e09d7ad3cdd2a234a8aa1245731d283c1296d5d939,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431769460262754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72
513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001d412c8a1036be120207d7090f1950f08a2a012b11b430c1889b38d0c4edb7,PodSandboxId:96d321001a67e507187f6df1e748580b72e5866d3ab475e44fd3aa5fa7fd9592,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431769261739993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 831a3116,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f924e45b-363d-4edc-9ff5-a3fa98bad1b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.315066779Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3d518e6b-2cdb-42fb-a36e-90673f44a72d name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.315404625Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:64388704da8e3553b4b7ba212150b8be8d2753b98093d1a05cff068ba60739d9,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-mzmpx,Uid:1fd37765-b8e2-4e0c-8e64-71d975f27bf8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431790342836159,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T21:42:54.281564481Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e329baa51fcb84e5627e3f4b3a56c4c55cfbee4c10e2f831d562c3c1bb91e8b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-zcppg,Uid:4742270b-6c64-411b-bfb6-8c53211aa106,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1693431790144336603,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T21:42:54.281568957Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:67db5a8a-290a-40a7-b42e-212d99db812a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431774649910983,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]st
ring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-30T21:42:54.281563281Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d7f7a206e6a5c23a188e65f23d2f395a14e9b7bfe57ef487c8006f7e1929c875,Metadata:&PodSandboxMetadata{Name:kube-proxy-vltx5,Uid:24ee271e-5778-4d0c-ab2c-77426f2673b3,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1693431774639105845,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f2673b3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T21:42:54.281561839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f14557114180d6c8c6216c427519e972cd04d7c1ccde151558efaa6fd672d534,Metadata:&PodSandboxMetadata{Name:kindnet-x5kk4,Uid:2fdd77f6-856a-4400-b881-210549c588e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431774610088866,Labels:map[string]string{app: kindnet,controller-revision-hash: 77b9cf4878,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fdd77f6-856a-4400-b881-210549c588e2,k8s-app: kindnet,pod-template-gener
ation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T21:42:54.281565525Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:131a16a99e528bc1acdbaef31394df023ebb2890687d0b17bae7e4303cda43ab,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-752665,Uid:2957dd3360cebd27e85f1db4b73fa253,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431768831776970,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2957dd3360cebd27e85f1db4b73fa253,kubernetes.io/config.seen: 2023-08-30T21:42:48.277720328Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1259613dda6af4f7145c72e09d7ad3cdd2a234a8aa1245731d283c1296d5d939,Metadata:&PodSandboxMetadata{Name:kube-apiserver-mul
tinode-752665,Uid:063d73d4de1cf2feb4ba920354d72513,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431768815776990,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72513,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.20:8443,kubernetes.io/config.hash: 063d73d4de1cf2feb4ba920354d72513,kubernetes.io/config.seen: 2023-08-30T21:42:48.277718558Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:154615dd125abb01ce8020f111e0fa6d9c13f2a91bbfd12b9f0da953a568aa9b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-752665,Uid:c398e6beaac5b42fe6a53cb0b1863506,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431768797122684,Labels:map[string]string{component: kube-controller-manager,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe6a53cb0b1863506,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c398e6beaac5b42fe6a53cb0b1863506,kubernetes.io/config.seen: 2023-08-30T21:42:48.277719575Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:96d321001a67e507187f6df1e748580b72e5866d3ab475e44fd3aa5fa7fd9592,Metadata:&PodSandboxMetadata{Name:etcd-multinode-752665,Uid:3d44ed339e19dd41d07034008e5b52b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693431768755936173,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.20:2379,kuberne
tes.io/config.hash: 3d44ed339e19dd41d07034008e5b52b3,kubernetes.io/config.seen: 2023-08-30T21:42:48.277715115Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=3d518e6b-2cdb-42fb-a36e-90673f44a72d name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.316137572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e38037f2-3a63-4c98-a5ca-824ca924d8be name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.316207279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e38037f2-3a63-4c98-a5ca-824ca924d8be name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.320006956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b4f12e1f60a8b578c5b0c4f72357e2847f59a9b954f85641ed57777fae2f0e7,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431806514239799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da629bd30c95063dfbbcb0aa6f9b60384b6991ca429c640b40faa57dec40c50,PodSandboxId:64388704da8e3553b4b7ba212150b8be8d2753b98093d1a05cff068ba60739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431792141859158,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b097d6fbe7bbb1643d909ef729007341ae7fc0ab1ec80207195fd5969955584,PodSandboxId:7e329baa51fcb84e5627e3f4b3a56c4c55cfbee4c10e2f831d562c3c1bb91e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431790767211760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a2043746af8af2e838c63550be9237045e88d206eb4789240487e7c28d4ee5,PodSandboxId:f14557114180d6c8c6216c427519e972cd04d7c1ccde151558efaa6fd672d534,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431777589121684,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4e1b35f17d2a1b81fa2d54c757a4b708b4dd37cc7488cc46d693473fcc9bb2,PodSandboxId:d7f7a206e6a5c23a188e65f23d2f395a14e9b7bfe57ef487c8006f7e1929c875,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431775227828638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f
2673b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a033698b7a59dfc49b808574e8dc26210f3ed2c47e3087a9aecc85370aebd4d,PodSandboxId:131a16a99e528bc1acdbaef31394df023ebb2890687d0b17bae7e4303cda43ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431769964651355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Anno
tations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c363b6d55bfb292ea99406be9e755bf39bd27ebf8477ddc9718c3ce73c120db,PodSandboxId:154615dd125abb01ce8020f111e0fa6d9c13f2a91bbfd12b9f0da953a568aa9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431769491957125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42
fe6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39448b46e3c640ebbdcaae3b540696d2c0b6c2262e2b1675d8fedda15d463d1,PodSandboxId:1259613dda6af4f7145c72e09d7ad3cdd2a234a8aa1245731d283c1296d5d939,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431769460262754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d
72513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001d412c8a1036be120207d7090f1950f08a2a012b11b430c1889b38d0c4edb7,PodSandboxId:96d321001a67e507187f6df1e748580b72e5866d3ab475e44fd3aa5fa7fd9592,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431769261739993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 831a3116,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e38037f2-3a63-4c98-a5ca-824ca924d8be name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.351575734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f74693df-16c4-4610-91e9-968e08a42969 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.351633013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f74693df-16c4-4610-91e9-968e08a42969 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.351902288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b4f12e1f60a8b578c5b0c4f72357e2847f59a9b954f85641ed57777fae2f0e7,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431806514239799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da629bd30c95063dfbbcb0aa6f9b60384b6991ca429c640b40faa57dec40c50,PodSandboxId:64388704da8e3553b4b7ba212150b8be8d2753b98093d1a05cff068ba60739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431792141859158,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b097d6fbe7bbb1643d909ef729007341ae7fc0ab1ec80207195fd5969955584,PodSandboxId:7e329baa51fcb84e5627e3f4b3a56c4c55cfbee4c10e2f831d562c3c1bb91e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431790767211760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a2043746af8af2e838c63550be9237045e88d206eb4789240487e7c28d4ee5,PodSandboxId:f14557114180d6c8c6216c427519e972cd04d7c1ccde151558efaa6fd672d534,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431777589121684,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73ce9a76a5bbd0d3e23368c634b78d327431b0efbea79d411db2d27e7d123e75,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1693431775357908474,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4e1b35f17d2a1b81fa2d54c757a4b708b4dd37cc7488cc46d693473fcc9bb2,PodSandboxId:d7f7a206e6a5c23a188e65f23d2f395a14e9b7bfe57ef487c8006f7e1929c875,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431775227828638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f26
73b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a033698b7a59dfc49b808574e8dc26210f3ed2c47e3087a9aecc85370aebd4d,PodSandboxId:131a16a99e528bc1acdbaef31394df023ebb2890687d0b17bae7e4303cda43ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431769964651355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annota
tions:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c363b6d55bfb292ea99406be9e755bf39bd27ebf8477ddc9718c3ce73c120db,PodSandboxId:154615dd125abb01ce8020f111e0fa6d9c13f2a91bbfd12b9f0da953a568aa9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431769491957125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe
6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39448b46e3c640ebbdcaae3b540696d2c0b6c2262e2b1675d8fedda15d463d1,PodSandboxId:1259613dda6af4f7145c72e09d7ad3cdd2a234a8aa1245731d283c1296d5d939,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431769460262754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72
513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001d412c8a1036be120207d7090f1950f08a2a012b11b430c1889b38d0c4edb7,PodSandboxId:96d321001a67e507187f6df1e748580b72e5866d3ab475e44fd3aa5fa7fd9592,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431769261739993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 831a3116,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f74693df-16c4-4610-91e9-968e08a42969 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.384910678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=861145ce-07de-45e8-8a0c-f80d85e51e90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.384972382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=861145ce-07de-45e8-8a0c-f80d85e51e90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.385200639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b4f12e1f60a8b578c5b0c4f72357e2847f59a9b954f85641ed57777fae2f0e7,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431806514239799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da629bd30c95063dfbbcb0aa6f9b60384b6991ca429c640b40faa57dec40c50,PodSandboxId:64388704da8e3553b4b7ba212150b8be8d2753b98093d1a05cff068ba60739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431792141859158,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b097d6fbe7bbb1643d909ef729007341ae7fc0ab1ec80207195fd5969955584,PodSandboxId:7e329baa51fcb84e5627e3f4b3a56c4c55cfbee4c10e2f831d562c3c1bb91e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431790767211760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a2043746af8af2e838c63550be9237045e88d206eb4789240487e7c28d4ee5,PodSandboxId:f14557114180d6c8c6216c427519e972cd04d7c1ccde151558efaa6fd672d534,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431777589121684,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73ce9a76a5bbd0d3e23368c634b78d327431b0efbea79d411db2d27e7d123e75,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1693431775357908474,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4e1b35f17d2a1b81fa2d54c757a4b708b4dd37cc7488cc46d693473fcc9bb2,PodSandboxId:d7f7a206e6a5c23a188e65f23d2f395a14e9b7bfe57ef487c8006f7e1929c875,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431775227828638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f26
73b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a033698b7a59dfc49b808574e8dc26210f3ed2c47e3087a9aecc85370aebd4d,PodSandboxId:131a16a99e528bc1acdbaef31394df023ebb2890687d0b17bae7e4303cda43ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431769964651355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annota
tions:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c363b6d55bfb292ea99406be9e755bf39bd27ebf8477ddc9718c3ce73c120db,PodSandboxId:154615dd125abb01ce8020f111e0fa6d9c13f2a91bbfd12b9f0da953a568aa9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431769491957125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe
6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39448b46e3c640ebbdcaae3b540696d2c0b6c2262e2b1675d8fedda15d463d1,PodSandboxId:1259613dda6af4f7145c72e09d7ad3cdd2a234a8aa1245731d283c1296d5d939,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431769460262754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72
513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001d412c8a1036be120207d7090f1950f08a2a012b11b430c1889b38d0c4edb7,PodSandboxId:96d321001a67e507187f6df1e748580b72e5866d3ab475e44fd3aa5fa7fd9592,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431769261739993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 831a3116,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=861145ce-07de-45e8-8a0c-f80d85e51e90 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.416741302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e1d9339b-edee-4b1d-91aa-5aa7d19867fb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.416831405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e1d9339b-edee-4b1d-91aa-5aa7d19867fb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.417065647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b4f12e1f60a8b578c5b0c4f72357e2847f59a9b954f85641ed57777fae2f0e7,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431806514239799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da629bd30c95063dfbbcb0aa6f9b60384b6991ca429c640b40faa57dec40c50,PodSandboxId:64388704da8e3553b4b7ba212150b8be8d2753b98093d1a05cff068ba60739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431792141859158,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b097d6fbe7bbb1643d909ef729007341ae7fc0ab1ec80207195fd5969955584,PodSandboxId:7e329baa51fcb84e5627e3f4b3a56c4c55cfbee4c10e2f831d562c3c1bb91e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431790767211760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a2043746af8af2e838c63550be9237045e88d206eb4789240487e7c28d4ee5,PodSandboxId:f14557114180d6c8c6216c427519e972cd04d7c1ccde151558efaa6fd672d534,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431777589121684,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73ce9a76a5bbd0d3e23368c634b78d327431b0efbea79d411db2d27e7d123e75,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1693431775357908474,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4e1b35f17d2a1b81fa2d54c757a4b708b4dd37cc7488cc46d693473fcc9bb2,PodSandboxId:d7f7a206e6a5c23a188e65f23d2f395a14e9b7bfe57ef487c8006f7e1929c875,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431775227828638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f26
73b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a033698b7a59dfc49b808574e8dc26210f3ed2c47e3087a9aecc85370aebd4d,PodSandboxId:131a16a99e528bc1acdbaef31394df023ebb2890687d0b17bae7e4303cda43ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431769964651355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annota
tions:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c363b6d55bfb292ea99406be9e755bf39bd27ebf8477ddc9718c3ce73c120db,PodSandboxId:154615dd125abb01ce8020f111e0fa6d9c13f2a91bbfd12b9f0da953a568aa9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431769491957125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe
6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39448b46e3c640ebbdcaae3b540696d2c0b6c2262e2b1675d8fedda15d463d1,PodSandboxId:1259613dda6af4f7145c72e09d7ad3cdd2a234a8aa1245731d283c1296d5d939,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431769460262754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72
513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001d412c8a1036be120207d7090f1950f08a2a012b11b430c1889b38d0c4edb7,PodSandboxId:96d321001a67e507187f6df1e748580b72e5866d3ab475e44fd3aa5fa7fd9592,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431769261739993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 831a3116,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e1d9339b-edee-4b1d-91aa-5aa7d19867fb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.417844081Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=8d7de133-bb38-447e-b27d-fcb09fe0e4a3 name=/runtime.v1.RuntimeService/Status
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.417957438Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=8d7de133-bb38-447e-b27d-fcb09fe0e4a3 name=/runtime.v1.RuntimeService/Status
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.458967751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e82f1b4d-6220-4a50-908a-f118206c26d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.459038715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e82f1b4d-6220-4a50-908a-f118206c26d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.459270918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b4f12e1f60a8b578c5b0c4f72357e2847f59a9b954f85641ed57777fae2f0e7,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431806514239799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da629bd30c95063dfbbcb0aa6f9b60384b6991ca429c640b40faa57dec40c50,PodSandboxId:64388704da8e3553b4b7ba212150b8be8d2753b98093d1a05cff068ba60739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431792141859158,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b097d6fbe7bbb1643d909ef729007341ae7fc0ab1ec80207195fd5969955584,PodSandboxId:7e329baa51fcb84e5627e3f4b3a56c4c55cfbee4c10e2f831d562c3c1bb91e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431790767211760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a2043746af8af2e838c63550be9237045e88d206eb4789240487e7c28d4ee5,PodSandboxId:f14557114180d6c8c6216c427519e972cd04d7c1ccde151558efaa6fd672d534,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431777589121684,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73ce9a76a5bbd0d3e23368c634b78d327431b0efbea79d411db2d27e7d123e75,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1693431775357908474,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4e1b35f17d2a1b81fa2d54c757a4b708b4dd37cc7488cc46d693473fcc9bb2,PodSandboxId:d7f7a206e6a5c23a188e65f23d2f395a14e9b7bfe57ef487c8006f7e1929c875,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431775227828638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f26
73b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a033698b7a59dfc49b808574e8dc26210f3ed2c47e3087a9aecc85370aebd4d,PodSandboxId:131a16a99e528bc1acdbaef31394df023ebb2890687d0b17bae7e4303cda43ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431769964651355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annota
tions:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c363b6d55bfb292ea99406be9e755bf39bd27ebf8477ddc9718c3ce73c120db,PodSandboxId:154615dd125abb01ce8020f111e0fa6d9c13f2a91bbfd12b9f0da953a568aa9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431769491957125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe
6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39448b46e3c640ebbdcaae3b540696d2c0b6c2262e2b1675d8fedda15d463d1,PodSandboxId:1259613dda6af4f7145c72e09d7ad3cdd2a234a8aa1245731d283c1296d5d939,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431769460262754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72
513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001d412c8a1036be120207d7090f1950f08a2a012b11b430c1889b38d0c4edb7,PodSandboxId:96d321001a67e507187f6df1e748580b72e5866d3ab475e44fd3aa5fa7fd9592,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431769261739993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 831a3116,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e82f1b4d-6220-4a50-908a-f118206c26d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.492210084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d8c9544-21b7-4637-a470-92edda9537c7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.492273630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d8c9544-21b7-4637-a470-92edda9537c7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 21:46:38 multinode-752665 crio[706]: time="2023-08-30 21:46:38.492579956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b4f12e1f60a8b578c5b0c4f72357e2847f59a9b954f85641ed57777fae2f0e7,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693431806514239799,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da629bd30c95063dfbbcb0aa6f9b60384b6991ca429c640b40faa57dec40c50,PodSandboxId:64388704da8e3553b4b7ba212150b8be8d2753b98093d1a05cff068ba60739d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1693431792141859158,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-mzmpx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd37765-b8e2-4e0c-8e64-71d975f27bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 383a08c9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b097d6fbe7bbb1643d909ef729007341ae7fc0ab1ec80207195fd5969955584,PodSandboxId:7e329baa51fcb84e5627e3f4b3a56c4c55cfbee4c10e2f831d562c3c1bb91e8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693431790767211760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zcppg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4742270b-6c64-411b-bfb6-8c53211aa106,},Annotations:map[string]string{io.kubernetes.container.hash: 971c0ac6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a2043746af8af2e838c63550be9237045e88d206eb4789240487e7c28d4ee5,PodSandboxId:f14557114180d6c8c6216c427519e972cd04d7c1ccde151558efaa6fd672d534,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1693431777589121684,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x5kk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2fdd77f6-856a-4400-b881-210549c588e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe61f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73ce9a76a5bbd0d3e23368c634b78d327431b0efbea79d411db2d27e7d123e75,PodSandboxId:52cb52b833daf2e54f97a2c0b335d1a6b09a0d7157d0eb3e8948da699580a5fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1693431775357908474,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 67db5a8a-290a-40a7-b42e-212d99db812a,},Annotations:map[string]string{io.kubernetes.container.hash: 65958f9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4e1b35f17d2a1b81fa2d54c757a4b708b4dd37cc7488cc46d693473fcc9bb2,PodSandboxId:d7f7a206e6a5c23a188e65f23d2f395a14e9b7bfe57ef487c8006f7e1929c875,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693431775227828638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vltx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24ee271e-5778-4d0c-ab2c-77426f26
73b3,},Annotations:map[string]string{io.kubernetes.container.hash: eeffbbc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a033698b7a59dfc49b808574e8dc26210f3ed2c47e3087a9aecc85370aebd4d,PodSandboxId:131a16a99e528bc1acdbaef31394df023ebb2890687d0b17bae7e4303cda43ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693431769964651355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2957dd3360cebd27e85f1db4b73fa253,},Annota
tions:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c363b6d55bfb292ea99406be9e755bf39bd27ebf8477ddc9718c3ce73c120db,PodSandboxId:154615dd125abb01ce8020f111e0fa6d9c13f2a91bbfd12b9f0da953a568aa9b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693431769491957125,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c398e6beaac5b42fe
6a53cb0b1863506,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e39448b46e3c640ebbdcaae3b540696d2c0b6c2262e2b1675d8fedda15d463d1,PodSandboxId:1259613dda6af4f7145c72e09d7ad3cdd2a234a8aa1245731d283c1296d5d939,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693431769460262754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 063d73d4de1cf2feb4ba920354d72
513,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9ee7e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001d412c8a1036be120207d7090f1950f08a2a012b11b430c1889b38d0c4edb7,PodSandboxId:96d321001a67e507187f6df1e748580b72e5866d3ab475e44fd3aa5fa7fd9592,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693431769261739993,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-752665,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d44ed339e19dd41d07034008e5b52b3,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 831a3116,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d8c9544-21b7-4637-a470-92edda9537c7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	1b4f12e1f60a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   52cb52b833daf
	8da629bd30c95       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   64388704da8e3
	6b097d6fbe7bb       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   7e329baa51fcb
	b3a2043746af8       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      3 minutes ago       Running             kindnet-cni               1                   f14557114180d
	73ce9a76a5bbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   52cb52b833daf
	ce4e1b35f17d2       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      3 minutes ago       Running             kube-proxy                1                   d7f7a206e6a5c
	5a033698b7a59       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      3 minutes ago       Running             kube-scheduler            1                   131a16a99e528
	9c363b6d55bfb       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      3 minutes ago       Running             kube-controller-manager   1                   154615dd125ab
	e39448b46e3c6       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      3 minutes ago       Running             kube-apiserver            1                   1259613dda6af
	001d412c8a103       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   96d321001a67e
	
	* 
	* ==> coredns [6b097d6fbe7bbb1643d909ef729007341ae7fc0ab1ec80207195fd5969955584] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59153 - 52934 "HINFO IN 5664568617260892101.7868403556306888732. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011718446s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-752665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-752665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=multinode-752665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T21_32_36_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:32:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-752665
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:46:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:43:24 +0000   Wed, 30 Aug 2023 21:32:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:43:24 +0000   Wed, 30 Aug 2023 21:32:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:43:24 +0000   Wed, 30 Aug 2023 21:32:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:43:24 +0000   Wed, 30 Aug 2023 21:43:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.20
	  Hostname:    multinode-752665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a335ffec70c54a6faef870bcf3c0d15e
	  System UUID:                a335ffec-70c5-4a6f-aef8-70bcf3c0d15e
	  Boot ID:                    19a688a5-e406-4712-a21f-8d36f7137e17
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-mzmpx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-zcppg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-752665                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-x5kk4                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-752665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-752665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vltx5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-752665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-752665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-752665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-752665 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-752665 event: Registered Node multinode-752665 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-752665 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-752665 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-752665 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-752665 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-752665 event: Registered Node multinode-752665 in Controller
	
	
	Name:               multinode-752665-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-752665-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:44:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-752665-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:46:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:44:51 +0000   Wed, 30 Aug 2023 21:44:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:44:51 +0000   Wed, 30 Aug 2023 21:44:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:44:51 +0000   Wed, 30 Aug 2023 21:44:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:44:51 +0000   Wed, 30 Aug 2023 21:44:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.46
	  Hostname:    multinode-752665-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4c536b276364b2c8c36c1397595c512
	  System UUID:                c4c536b2-7636-4b2c-8c36-c1397595c512
	  Boot ID:                    2969c860-c01d-40ee-9780-4d6aaa6f43b6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-67j2j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-4q5fx               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-5twl5            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   Starting                 105s                 kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)    kubelet     Node multinode-752665-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)    kubelet     Node multinode-752665-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)    kubelet     Node multinode-752665-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                  kubelet     Node multinode-752665-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m53s                kubelet     Node multinode-752665-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m8s (x2 over 3m8s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 107s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  107s (x2 over 107s)  kubelet     Node multinode-752665-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s (x2 over 107s)  kubelet     Node multinode-752665-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s (x2 over 107s)  kubelet     Node multinode-752665-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  107s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                107s                 kubelet     Node multinode-752665-m02 status is now: NodeReady
	
	
	Name:               multinode-752665-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-752665-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:46:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-752665-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:46:34 +0000   Wed, 30 Aug 2023 21:46:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:46:34 +0000   Wed, 30 Aug 2023 21:46:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:46:34 +0000   Wed, 30 Aug 2023 21:46:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:46:34 +0000   Wed, 30 Aug 2023 21:46:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.30
	  Hostname:    multinode-752665-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb641496a28849c2a4e151d7053b0ce6
	  System UUID:                bb641496-a288-49c2-a4e1-51d7053b0ce6
	  Boot ID:                    215894c4-0559-412a-a09f-611e3f69f71b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-f5rjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kindnet-d4xrz               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-jwftn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 7s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-752665-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-752665-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-752665-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-752665-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-752665-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-752665-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-752665-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             71s                kubelet     Node multinode-752665-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        37s (x2 over 97s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeReady                10s (x2 over 11m)  kubelet     Node multinode-752665-m03 status is now: NodeReady
	  Normal   Starting                 4s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  4s (x2 over 4s)    kubelet     Node multinode-752665-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s (x2 over 4s)    kubelet     Node multinode-752665-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s (x2 over 4s)    kubelet     Node multinode-752665-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                 kubelet     Node multinode-752665-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug30 21:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074042] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.299783] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.346067] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136468] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.434562] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.724407] systemd-fstab-generator[631]: Ignoring "noauto" for root device
	[  +0.117780] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.136417] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.094676] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.221950] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[ +17.079067] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[Aug30 21:43] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [001d412c8a1036be120207d7090f1950f08a2a012b11b430c1889b38d0c4edb7] <==
	* {"level":"info","ts":"2023-08-30T21:42:51.16514Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-30T21:42:51.165167Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-30T21:42:51.165354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 switched to configuration voters=(11351028140387178485)"}
	{"level":"info","ts":"2023-08-30T21:42:51.165417Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e50fb330f7c278b","local-member-id":"9d86f3f40f3d97f5","added-peer-id":"9d86f3f40f3d97f5","added-peer-peer-urls":["https://192.168.39.20:2380"]}
	{"level":"info","ts":"2023-08-30T21:42:51.165711Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e50fb330f7c278b","local-member-id":"9d86f3f40f3d97f5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:42:51.165824Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:42:51.169028Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-30T21:42:51.170436Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.20:2380"}
	{"level":"info","ts":"2023-08-30T21:42:51.170646Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.20:2380"}
	{"level":"info","ts":"2023-08-30T21:42:51.170797Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9d86f3f40f3d97f5","initial-advertise-peer-urls":["https://192.168.39.20:2380"],"listen-peer-urls":["https://192.168.39.20:2380"],"advertise-client-urls":["https://192.168.39.20:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.20:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-30T21:42:51.170847Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-30T21:42:52.132888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-30T21:42:52.133053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-30T21:42:52.133095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 received MsgPreVoteResp from 9d86f3f40f3d97f5 at term 2"}
	{"level":"info","ts":"2023-08-30T21:42:52.133133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 became candidate at term 3"}
	{"level":"info","ts":"2023-08-30T21:42:52.133158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 received MsgVoteResp from 9d86f3f40f3d97f5 at term 3"}
	{"level":"info","ts":"2023-08-30T21:42:52.133185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9d86f3f40f3d97f5 became leader at term 3"}
	{"level":"info","ts":"2023-08-30T21:42:52.133214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9d86f3f40f3d97f5 elected leader 9d86f3f40f3d97f5 at term 3"}
	{"level":"info","ts":"2023-08-30T21:42:52.13491Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9d86f3f40f3d97f5","local-member-attributes":"{Name:multinode-752665 ClientURLs:[https://192.168.39.20:2379]}","request-path":"/0/members/9d86f3f40f3d97f5/attributes","cluster-id":"e50fb330f7c278b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T21:42:52.135136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T21:42:52.136223Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T21:42:52.143187Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T21:42:52.144077Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.20:2379"}
	{"level":"info","ts":"2023-08-30T21:42:52.150426Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T21:42:52.154618Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  21:46:38 up 4 min,  0 users,  load average: 0.09, 0.24, 0.12
	Linux multinode-752665 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [b3a2043746af8af2e838c63550be9237045e88d206eb4789240487e7c28d4ee5] <==
	* I0830 21:45:49.096959       1 main.go:250] Node multinode-752665-m03 has CIDR [10.244.3.0/24] 
	I0830 21:45:59.104933       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:45:59.104984       1 main.go:227] handling current node
	I0830 21:45:59.105018       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0830 21:45:59.105024       1 main.go:250] Node multinode-752665-m02 has CIDR [10.244.1.0/24] 
	I0830 21:45:59.105143       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0830 21:45:59.105181       1 main.go:250] Node multinode-752665-m03 has CIDR [10.244.3.0/24] 
	I0830 21:46:09.117435       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:46:09.117788       1 main.go:227] handling current node
	I0830 21:46:09.117836       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0830 21:46:09.117884       1 main.go:250] Node multinode-752665-m02 has CIDR [10.244.1.0/24] 
	I0830 21:46:09.118051       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0830 21:46:09.118085       1 main.go:250] Node multinode-752665-m03 has CIDR [10.244.3.0/24] 
	I0830 21:46:19.130675       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:46:19.130802       1 main.go:227] handling current node
	I0830 21:46:19.130837       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0830 21:46:19.130866       1 main.go:250] Node multinode-752665-m02 has CIDR [10.244.1.0/24] 
	I0830 21:46:19.131020       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0830 21:46:19.131054       1 main.go:250] Node multinode-752665-m03 has CIDR [10.244.3.0/24] 
	I0830 21:46:29.142315       1 main.go:223] Handling node with IPs: map[192.168.39.20:{}]
	I0830 21:46:29.142633       1 main.go:227] handling current node
	I0830 21:46:29.142669       1 main.go:223] Handling node with IPs: map[192.168.39.46:{}]
	I0830 21:46:29.142690       1 main.go:250] Node multinode-752665-m02 has CIDR [10.244.1.0/24] 
	I0830 21:46:29.142839       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0830 21:46:29.142863       1 main.go:250] Node multinode-752665-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [e39448b46e3c640ebbdcaae3b540696d2c0b6c2262e2b1675d8fedda15d463d1] <==
	* I0830 21:42:53.555248       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0830 21:42:53.555375       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0830 21:42:53.555615       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0830 21:42:53.688078       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0830 21:42:53.696247       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0830 21:42:53.696375       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0830 21:42:53.729706       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0830 21:42:53.742716       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0830 21:42:53.751109       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0830 21:42:53.751637       1 shared_informer.go:318] Caches are synced for configmaps
	I0830 21:42:53.752658       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0830 21:42:53.752705       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0830 21:42:53.756731       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0830 21:42:53.757791       1 aggregator.go:166] initial CRD sync complete...
	I0830 21:42:53.757837       1 autoregister_controller.go:141] Starting autoregister controller
	I0830 21:42:53.757851       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0830 21:42:53.757858       1 cache.go:39] Caches are synced for autoregister controller
	I0830 21:42:54.553060       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0830 21:42:56.424569       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0830 21:42:56.560208       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0830 21:42:56.571782       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0830 21:42:56.645269       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0830 21:42:56.656827       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0830 21:43:06.989839       1 controller.go:624] quota admission added evaluator for: endpoints
	I0830 21:43:07.040178       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [9c363b6d55bfb292ea99406be9e755bf39bd27ebf8477ddc9718c3ce73c120db] <==
	* I0830 21:44:51.411375       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-j4rx4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-j4rx4"
	I0830 21:44:51.425056       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-752665-m02" podCIDRs=["10.244.1.0/24"]
	I0830 21:44:51.774904       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-752665-m02"
	I0830 21:44:52.310801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.733µs"
	I0830 21:44:56.706017       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-j4rx4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-j4rx4"
	I0830 21:45:05.590291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.14µs"
	I0830 21:45:06.190979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.302µs"
	I0830 21:45:06.194770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.668µs"
	I0830 21:45:27.778058       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-752665-m02"
	I0830 21:46:28.572166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="124.358µs"
	I0830 21:46:28.746032       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-752665-m02"
	I0830 21:46:30.735147       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-67j2j"
	I0830 21:46:30.742119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="24.561147ms"
	I0830 21:46:30.793745       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.438265ms"
	I0830 21:46:30.806705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.784703ms"
	I0830 21:46:30.806841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.157µs"
	I0830 21:46:31.735466       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-f5rjq" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-f5rjq"
	I0830 21:46:32.469189       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.279519ms"
	I0830 21:46:32.469474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="99.784µs"
	I0830 21:46:33.746031       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-752665-m02"
	I0830 21:46:34.389611       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-752665-m02"
	I0830 21:46:34.390716       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-752665-m03\" does not exist"
	I0830 21:46:34.401411       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-752665-m03" podCIDRs=["10.244.2.0/24"]
	I0830 21:46:34.447118       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-752665-m03"
	I0830 21:46:35.322120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="92.564µs"
	
	* 
	* ==> kube-proxy [ce4e1b35f17d2a1b81fa2d54c757a4b708b4dd37cc7488cc46d693473fcc9bb2] <==
	* I0830 21:42:55.602836       1 server_others.go:69] "Using iptables proxy"
	I0830 21:42:55.633333       1 node.go:141] Successfully retrieved node IP: 192.168.39.20
	I0830 21:42:55.767761       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 21:42:55.767804       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 21:42:55.770192       1 server_others.go:152] "Using iptables Proxier"
	I0830 21:42:55.770230       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 21:42:55.770367       1 server.go:846] "Version info" version="v1.28.1"
	I0830 21:42:55.770376       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 21:42:55.775881       1 config.go:188] "Starting service config controller"
	I0830 21:42:55.775904       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 21:42:55.775921       1 config.go:97] "Starting endpoint slice config controller"
	I0830 21:42:55.775924       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 21:42:55.776391       1 config.go:315] "Starting node config controller"
	I0830 21:42:55.776397       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 21:42:55.886606       1 shared_informer.go:318] Caches are synced for service config
	I0830 21:42:55.893659       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 21:42:55.894353       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [5a033698b7a59dfc49b808574e8dc26210f3ed2c47e3087a9aecc85370aebd4d] <==
	* I0830 21:42:51.766922       1 serving.go:348] Generated self-signed cert in-memory
	W0830 21:42:53.623943       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0830 21:42:53.624063       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 21:42:53.624074       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0830 21:42:53.624084       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 21:42:53.690263       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0830 21:42:53.690331       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 21:42:53.695144       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0830 21:42:53.695197       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 21:42:53.706408       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0830 21:42:53.706598       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0830 21:42:53.796047       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 21:42:22 UTC, ends at Wed 2023-08-30 21:46:39 UTC. --
	Aug 30 21:42:58 multinode-752665 kubelet[915]: E0830 21:42:58.314919     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-mzmpx" podUID="1fd37765-b8e2-4e0c-8e64-71d975f27bf8"
	Aug 30 21:42:58 multinode-752665 kubelet[915]: E0830 21:42:58.347743     915 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Aug 30 21:42:59 multinode-752665 kubelet[915]: E0830 21:42:59.315035     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-zcppg" podUID="4742270b-6c64-411b-bfb6-8c53211aa106"
	Aug 30 21:43:00 multinode-752665 kubelet[915]: E0830 21:43:00.314234     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-mzmpx" podUID="1fd37765-b8e2-4e0c-8e64-71d975f27bf8"
	Aug 30 21:43:01 multinode-752665 kubelet[915]: E0830 21:43:01.314653     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-zcppg" podUID="4742270b-6c64-411b-bfb6-8c53211aa106"
	Aug 30 21:43:01 multinode-752665 kubelet[915]: E0830 21:43:01.872567     915 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 30 21:43:01 multinode-752665 kubelet[915]: E0830 21:43:01.872694     915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4742270b-6c64-411b-bfb6-8c53211aa106-config-volume podName:4742270b-6c64-411b-bfb6-8c53211aa106 nodeName:}" failed. No retries permitted until 2023-08-30 21:43:09.872678585 +0000 UTC m=+21.832388386 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4742270b-6c64-411b-bfb6-8c53211aa106-config-volume") pod "coredns-5dd5756b68-zcppg" (UID: "4742270b-6c64-411b-bfb6-8c53211aa106") : object "kube-system"/"coredns" not registered
	Aug 30 21:43:01 multinode-752665 kubelet[915]: E0830 21:43:01.973175     915 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Aug 30 21:43:01 multinode-752665 kubelet[915]: E0830 21:43:01.973234     915 projected.go:198] Error preparing data for projected volume kube-api-access-sm9dm for pod default/busybox-5bc68d56bd-mzmpx: object "default"/"kube-root-ca.crt" not registered
	Aug 30 21:43:01 multinode-752665 kubelet[915]: E0830 21:43:01.973317     915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1fd37765-b8e2-4e0c-8e64-71d975f27bf8-kube-api-access-sm9dm podName:1fd37765-b8e2-4e0c-8e64-71d975f27bf8 nodeName:}" failed. No retries permitted until 2023-08-30 21:43:09.973298692 +0000 UTC m=+21.933008494 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-sm9dm" (UniqueName: "kubernetes.io/projected/1fd37765-b8e2-4e0c-8e64-71d975f27bf8-kube-api-access-sm9dm") pod "busybox-5bc68d56bd-mzmpx" (UID: "1fd37765-b8e2-4e0c-8e64-71d975f27bf8") : object "default"/"kube-root-ca.crt" not registered
	Aug 30 21:43:02 multinode-752665 kubelet[915]: E0830 21:43:02.314601     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-mzmpx" podUID="1fd37765-b8e2-4e0c-8e64-71d975f27bf8"
	Aug 30 21:43:03 multinode-752665 kubelet[915]: E0830 21:43:03.314676     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-zcppg" podUID="4742270b-6c64-411b-bfb6-8c53211aa106"
	Aug 30 21:43:26 multinode-752665 kubelet[915]: I0830 21:43:26.484132     915 scope.go:117] "RemoveContainer" containerID="73ce9a76a5bbd0d3e23368c634b78d327431b0efbea79d411db2d27e7d123e75"
	Aug 30 21:43:48 multinode-752665 kubelet[915]: E0830 21:43:48.338930     915 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 21:43:48 multinode-752665 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 21:43:48 multinode-752665 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 21:43:48 multinode-752665 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 21:44:48 multinode-752665 kubelet[915]: E0830 21:44:48.341347     915 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 21:44:48 multinode-752665 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 21:44:48 multinode-752665 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 21:44:48 multinode-752665 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 21:45:48 multinode-752665 kubelet[915]: E0830 21:45:48.340001     915 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 21:45:48 multinode-752665 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 21:45:48 multinode-752665 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 21:45:48 multinode-752665 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-752665 -n multinode-752665
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-752665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (688.22s)
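Note: the kubelet entries above fail once a minute with "ip6tables ... Table does not exist" while creating the KUBE-KUBELET-CANARY chain, which indicates the guest kernel simply has no IPv6 nat table loaded; the cluster itself keeps running. A minimal Go sketch (hypothetical helper, not minikube or kubelet code) of probing for that table before attempting to create a chain:

package main

import (
	"fmt"
	"os/exec"
)

// hasIP6TablesNAT reports whether "ip6tables -t nat -L -n" succeeds, i.e. the
// kernel exposes an IPv6 nat table (ip6table_nat module loaded).
// Hypothetical helper, used here only for illustration.
func hasIP6TablesNAT() bool {
	return exec.Command("ip6tables", "-t", "nat", "-L", "-n").Run() == nil
}

func main() {
	if hasIP6TablesNAT() {
		fmt.Println("IPv6 nat table available; a canary chain could be created")
	} else {
		fmt.Println("IPv6 nat table missing; skip the IPv6 canary (matches the kubelet errors above)")
	}
}

Run inside the Buildroot guest shown above this would take the second branch, matching the repeated canary errors; on a host with ip6table_nat loaded it takes the first.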

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 stop
E0830 21:46:49.715266  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:46:57.077034  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-752665 stop: exit status 82 (2m1.09373799s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-752665"  ...
	* Stopping node "multinode-752665"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-752665 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-752665 status: exit status 3 (18.678139712s)

                                                
                                                
-- stdout --
	multinode-752665
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-752665-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 21:49:01.492215  980819 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.20:22: connect: no route to host
	E0830 21:49:01.492303  980819 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.20:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-752665 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-752665 -n multinode-752665
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-752665 -n multinode-752665: exit status 3 (3.196464136s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 21:49:04.852125  980911 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.20:22: connect: no route to host
	E0830 21:49:04.852154  980911 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.20:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-752665" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.97s)
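Note: exit status 82 with GUEST_STOP_TIMEOUT above means the stop command kept polling the VM for roughly two minutes without it ever leaving the "Running" state. A rough Go sketch of that wait-with-deadline shape, using hypothetical names rather than minikube's actual stop path:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState until it reports "Stopped" or the timeout
// elapses, mirroring the general behavior behind the GUEST_STOP_TIMEOUT error.
func waitForStop(getState func() string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if getState() == "Stopped" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New(`stop: unable to stop vm, current state "Running"`)
}

func main() {
	// Simulated VM that never leaves the Running state, as in the failing run.
	err := waitForStop(func() string { return "Running" }, 5*time.Second)
	fmt.Println(err)
}

In the failing run the real equivalent of getState never returned a stopped state, so the deadline expired and the command surfaced the error shown in the stderr block above.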

                                                
                                    
x
+
TestPreload (178.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-229573 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-229573 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.694720654s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-229573 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-229573 image pull gcr.io/k8s-minikube/busybox: (1.142297174s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-229573
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-229573: (9.107398863s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-229573 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0830 21:59:22.734438  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:59:52.761531  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-229573 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.553597496s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-229573 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
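Note: preload_test.go:76 above expects gcr.io/k8s-minikube/busybox, pulled before the stop/start cycle, to still appear in the image list, but only the preloaded Kubernetes images are present. A minimal sketch of that kind of membership check (assumed helper, not the actual test implementation):

package main

import (
	"fmt"
	"strings"
)

// imageListContains reports whether any line of the "image list" output
// mentions the wanted image reference. Hypothetical helper for illustration.
func imageListContains(output, image string) bool {
	for _, line := range strings.Split(output, "\n") {
		if strings.Contains(strings.TrimSpace(line), image) {
			return true
		}
	}
	return false
}

func main() {
	// A trimmed copy of the failing output above: busybox is absent.
	output := "registry.k8s.io/pause:3.7\nregistry.k8s.io/etcd:3.5.3-0\ngcr.io/k8s-minikube/storage-provisioner:v5"
	fmt.Println(imageListContains(output, "gcr.io/k8s-minikube/busybox")) // prints false, hence the failure
}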
panic.go:522: *** TestPreload FAILED at 2023-08-30 22:00:21.166102934 +0000 UTC m=+3063.358852858
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-229573 -n test-preload-229573
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-229573 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-229573 logs -n 25: (1.003732603s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n multinode-752665 sudo cat                                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | /home/docker/cp-test_multinode-752665-m03_multinode-752665.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-752665 cp multinode-752665-m03:/home/docker/cp-test.txt                       | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m02:/home/docker/cp-test_multinode-752665-m03_multinode-752665-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n                                                                 | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | multinode-752665-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-752665 ssh -n multinode-752665-m02 sudo cat                                   | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	|         | /home/docker/cp-test_multinode-752665-m03_multinode-752665-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-752665 node stop m03                                                          | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:34 UTC |
	| node    | multinode-752665 node start                                                             | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:34 UTC | 30 Aug 23 21:35 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-752665                                                                | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:35 UTC |                     |
	| stop    | -p multinode-752665                                                                     | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:35 UTC |                     |
	| start   | -p multinode-752665                                                                     | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC | 30 Aug 23 21:46 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-752665                                                                | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:46 UTC |                     |
	| node    | multinode-752665 node delete                                                            | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:46 UTC | 30 Aug 23 21:46 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-752665 stop                                                                   | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:46 UTC |                     |
	| start   | -p multinode-752665                                                                     | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:49 UTC | 30 Aug 23 21:56 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-752665                                                                | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:56 UTC |                     |
	| start   | -p multinode-752665-m02                                                                 | multinode-752665-m02 | jenkins | v1.31.2 | 30 Aug 23 21:56 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-752665-m03                                                                 | multinode-752665-m03 | jenkins | v1.31.2 | 30 Aug 23 21:56 UTC | 30 Aug 23 21:57 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-752665                                                                 | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:57 UTC |                     |
	| delete  | -p multinode-752665-m03                                                                 | multinode-752665-m03 | jenkins | v1.31.2 | 30 Aug 23 21:57 UTC | 30 Aug 23 21:57 UTC |
	| delete  | -p multinode-752665                                                                     | multinode-752665     | jenkins | v1.31.2 | 30 Aug 23 21:57 UTC | 30 Aug 23 21:57 UTC |
	| start   | -p test-preload-229573                                                                  | test-preload-229573  | jenkins | v1.31.2 | 30 Aug 23 21:57 UTC | 30 Aug 23 21:58 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-229573 image pull                                                          | test-preload-229573  | jenkins | v1.31.2 | 30 Aug 23 21:58 UTC | 30 Aug 23 21:58 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-229573                                                                  | test-preload-229573  | jenkins | v1.31.2 | 30 Aug 23 21:58 UTC | 30 Aug 23 21:59 UTC |
	| start   | -p test-preload-229573                                                                  | test-preload-229573  | jenkins | v1.31.2 | 30 Aug 23 21:59 UTC | 30 Aug 23 22:00 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-229573 image list                                                          | test-preload-229573  | jenkins | v1.31.2 | 30 Aug 23 22:00 UTC | 30 Aug 23 22:00 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:59:06
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:59:06.426059  983589 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:59:06.426233  983589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:59:06.426243  983589 out.go:309] Setting ErrFile to fd 2...
	I0830 21:59:06.426250  983589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:59:06.426463  983589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 21:59:06.427013  983589 out.go:303] Setting JSON to false
	I0830 21:59:06.427989  983589 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13294,"bootTime":1693419453,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:59:06.428045  983589 start.go:138] virtualization: kvm guest
	I0830 21:59:06.430601  983589 out.go:177] * [test-preload-229573] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 21:59:06.432019  983589 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 21:59:06.433325  983589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:59:06.432085  983589 notify.go:220] Checking for updates...
	I0830 21:59:06.435806  983589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:59:06.437318  983589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:59:06.438840  983589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 21:59:06.440244  983589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:59:06.442271  983589 config.go:182] Loaded profile config "test-preload-229573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0830 21:59:06.442864  983589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:59:06.442938  983589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:59:06.457910  983589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0830 21:59:06.458323  983589 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:59:06.458895  983589 main.go:141] libmachine: Using API Version  1
	I0830 21:59:06.458916  983589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:59:06.459198  983589 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:59:06.459399  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:06.461646  983589 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 21:59:06.463103  983589 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:59:06.463393  983589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:59:06.463434  983589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:59:06.477691  983589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0830 21:59:06.478125  983589 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:59:06.478616  983589 main.go:141] libmachine: Using API Version  1
	I0830 21:59:06.478637  983589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:59:06.478985  983589 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:59:06.479146  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:06.513889  983589 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 21:59:06.515277  983589 start.go:298] selected driver: kvm2
	I0830 21:59:06.515290  983589 start.go:902] validating driver "kvm2" against &{Name:test-preload-229573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-229573 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:59:06.515416  983589 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:59:06.516353  983589 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:59:06.516450  983589 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 21:59:06.530667  983589 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 21:59:06.530958  983589 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 21:59:06.530994  983589 cni.go:84] Creating CNI manager for ""
	I0830 21:59:06.531005  983589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:59:06.531015  983589 start_flags.go:319] config:
	{Name:test-preload-229573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-229573 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:59:06.531182  983589 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:59:06.533309  983589 out.go:177] * Starting control plane node test-preload-229573 in cluster test-preload-229573
	I0830 21:59:06.534728  983589 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0830 21:59:06.558965  983589 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0830 21:59:06.558991  983589 cache.go:57] Caching tarball of preloaded images
	I0830 21:59:06.559190  983589 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0830 21:59:06.561240  983589 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0830 21:59:06.562651  983589 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:59:06.590566  983589 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0830 21:59:10.097853  983589 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:59:10.097954  983589 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:59:10.964148  983589 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I0830 21:59:10.964356  983589 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/config.json ...
	I0830 21:59:10.964587  983589 start.go:365] acquiring machines lock for test-preload-229573: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 21:59:10.964651  983589 start.go:369] acquired machines lock for "test-preload-229573" in 42.759µs
	I0830 21:59:10.964665  983589 start.go:96] Skipping create...Using existing machine configuration
	I0830 21:59:10.964671  983589 fix.go:54] fixHost starting: 
	I0830 21:59:10.964931  983589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:59:10.964966  983589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:59:10.979424  983589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35261
	I0830 21:59:10.979885  983589 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:59:10.980368  983589 main.go:141] libmachine: Using API Version  1
	I0830 21:59:10.980395  983589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:59:10.980711  983589 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:59:10.980891  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:10.981064  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetState
	I0830 21:59:10.982577  983589 fix.go:102] recreateIfNeeded on test-preload-229573: state=Stopped err=<nil>
	I0830 21:59:10.982597  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	W0830 21:59:10.982731  983589 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 21:59:10.984875  983589 out.go:177] * Restarting existing kvm2 VM for "test-preload-229573" ...
	I0830 21:59:10.986206  983589 main.go:141] libmachine: (test-preload-229573) Calling .Start
	I0830 21:59:10.986346  983589 main.go:141] libmachine: (test-preload-229573) Ensuring networks are active...
	I0830 21:59:10.987138  983589 main.go:141] libmachine: (test-preload-229573) Ensuring network default is active
	I0830 21:59:10.987487  983589 main.go:141] libmachine: (test-preload-229573) Ensuring network mk-test-preload-229573 is active
	I0830 21:59:10.987914  983589 main.go:141] libmachine: (test-preload-229573) Getting domain xml...
	I0830 21:59:10.988635  983589 main.go:141] libmachine: (test-preload-229573) Creating domain...
	I0830 21:59:12.184611  983589 main.go:141] libmachine: (test-preload-229573) Waiting to get IP...
	I0830 21:59:12.185495  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:12.185889  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:12.185942  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:12.185877  983635 retry.go:31] will retry after 229.067465ms: waiting for machine to come up
	I0830 21:59:12.416300  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:12.416682  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:12.416703  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:12.416663  983635 retry.go:31] will retry after 343.720994ms: waiting for machine to come up
	I0830 21:59:12.761944  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:12.762411  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:12.762444  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:12.762346  983635 retry.go:31] will retry after 378.67804ms: waiting for machine to come up
	I0830 21:59:13.142885  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:13.143290  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:13.143318  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:13.143252  983635 retry.go:31] will retry after 417.811337ms: waiting for machine to come up
	I0830 21:59:13.562790  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:13.563167  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:13.563196  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:13.563132  983635 retry.go:31] will retry after 724.812159ms: waiting for machine to come up
	I0830 21:59:14.290352  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:14.290782  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:14.290812  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:14.290716  983635 retry.go:31] will retry after 766.53118ms: waiting for machine to come up
	I0830 21:59:15.058633  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:15.058968  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:15.058996  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:15.058923  983635 retry.go:31] will retry after 801.670048ms: waiting for machine to come up
	I0830 21:59:15.862048  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:15.862421  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:15.862455  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:15.862381  983635 retry.go:31] will retry after 958.45191ms: waiting for machine to come up
	I0830 21:59:16.823104  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:16.823485  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:16.823524  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:16.823455  983635 retry.go:31] will retry after 1.312443641s: waiting for machine to come up
	I0830 21:59:18.138100  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:18.138498  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:18.138522  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:18.138468  983635 retry.go:31] will retry after 1.61015551s: waiting for machine to come up
	I0830 21:59:19.751232  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:19.751685  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:19.751716  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:19.751621  983635 retry.go:31] will retry after 1.823222026s: waiting for machine to come up
	I0830 21:59:21.576086  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:21.576444  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:21.576482  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:21.576397  983635 retry.go:31] will retry after 2.944256363s: waiting for machine to come up
	I0830 21:59:24.522604  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:24.522967  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:24.522997  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:24.522921  983635 retry.go:31] will retry after 3.202179564s: waiting for machine to come up
	I0830 21:59:27.727669  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:27.728045  983589 main.go:141] libmachine: (test-preload-229573) DBG | unable to find current IP address of domain test-preload-229573 in network mk-test-preload-229573
	I0830 21:59:27.728071  983589 main.go:141] libmachine: (test-preload-229573) DBG | I0830 21:59:27.727993  983635 retry.go:31] will retry after 4.666327099s: waiting for machine to come up
	I0830 21:59:32.396032  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.396575  983589 main.go:141] libmachine: (test-preload-229573) Found IP for machine: 192.168.39.128
	I0830 21:59:32.396630  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has current primary IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
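The block of retry.go lines above is libmachine polling libvirt for the guest's DHCP lease with a growing backoff until a lease for MAC 52:54:00:3e:42:15 appears. A rough standalone equivalent, assuming virsh is available on the host running the kvm2 driver, would be:

    while ! virsh --connect qemu:///system net-dhcp-leases mk-test-preload-229573 \
        | grep -q '52:54:00:3e:42:15'; do
      sleep 1   # libmachine increases this delay on every attempt rather than sleeping a fixed second
    done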
	I0830 21:59:32.396645  983589 main.go:141] libmachine: (test-preload-229573) Reserving static IP address...
	I0830 21:59:32.396979  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "test-preload-229573", mac: "52:54:00:3e:42:15", ip: "192.168.39.128"} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:32.397014  983589 main.go:141] libmachine: (test-preload-229573) Reserved static IP address: 192.168.39.128
	I0830 21:59:32.397031  983589 main.go:141] libmachine: (test-preload-229573) DBG | skip adding static IP to network mk-test-preload-229573 - found existing host DHCP lease matching {name: "test-preload-229573", mac: "52:54:00:3e:42:15", ip: "192.168.39.128"}
	I0830 21:59:32.397052  983589 main.go:141] libmachine: (test-preload-229573) DBG | Getting to WaitForSSH function...
	I0830 21:59:32.397070  983589 main.go:141] libmachine: (test-preload-229573) Waiting for SSH to be available...
	I0830 21:59:32.399260  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.399539  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:32.399564  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.399679  983589 main.go:141] libmachine: (test-preload-229573) DBG | Using SSH client type: external
	I0830 21:59:32.399709  983589 main.go:141] libmachine: (test-preload-229573) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa (-rw-------)
	I0830 21:59:32.399733  983589 main.go:141] libmachine: (test-preload-229573) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 21:59:32.399762  983589 main.go:141] libmachine: (test-preload-229573) DBG | About to run SSH command:
	I0830 21:59:32.399815  983589 main.go:141] libmachine: (test-preload-229573) DBG | exit 0
	I0830 21:59:32.491514  983589 main.go:141] libmachine: (test-preload-229573) DBG | SSH cmd err, output: <nil>: 
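The probe that just succeeded is libmachine's reachability check: it runs "exit 0" over SSH with the client options printed in the DBG lines above. The same check can be reproduced by hand (key path and address taken from those lines):

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa \
        docker@192.168.39.128 'exit 0' && echo reachable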
	I0830 21:59:32.491983  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetConfigRaw
	I0830 21:59:32.492683  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetIP
	I0830 21:59:32.494964  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.495363  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:32.495399  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.495659  983589 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/config.json ...
	I0830 21:59:32.495891  983589 machine.go:88] provisioning docker machine ...
	I0830 21:59:32.495913  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:32.496101  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetMachineName
	I0830 21:59:32.496285  983589 buildroot.go:166] provisioning hostname "test-preload-229573"
	I0830 21:59:32.496312  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetMachineName
	I0830 21:59:32.496457  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:32.498231  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.498574  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:32.498600  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.498695  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 21:59:32.498865  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:32.499009  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:32.499135  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 21:59:32.499287  983589 main.go:141] libmachine: Using SSH client type: native
	I0830 21:59:32.499697  983589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0830 21:59:32.499713  983589 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-229573 && echo "test-preload-229573" | sudo tee /etc/hostname
	I0830 21:59:32.636195  983589 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-229573
	
	I0830 21:59:32.636249  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:32.639072  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.639502  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:32.639538  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.639687  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 21:59:32.639922  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:32.640083  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:32.640210  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 21:59:32.640351  983589 main.go:141] libmachine: Using SSH client type: native
	I0830 21:59:32.640780  983589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0830 21:59:32.640805  983589 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-229573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-229573/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-229573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:59:32.772386  983589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:59:32.772419  983589 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 21:59:32.772442  983589 buildroot.go:174] setting up certificates
	I0830 21:59:32.772462  983589 provision.go:83] configureAuth start
	I0830 21:59:32.772472  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetMachineName
	I0830 21:59:32.772773  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetIP
	I0830 21:59:32.775885  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.776251  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:32.776285  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.776405  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:32.778469  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.778760  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:32.778794  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.778889  983589 provision.go:138] copyHostCerts
	I0830 21:59:32.778950  983589 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 21:59:32.778969  983589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 21:59:32.779034  983589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 21:59:32.779142  983589 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 21:59:32.779156  983589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 21:59:32.779186  983589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 21:59:32.779246  983589 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 21:59:32.779253  983589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 21:59:32.779274  983589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 21:59:32.779356  983589 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.test-preload-229573 san=[192.168.39.128 192.168.39.128 localhost 127.0.0.1 minikube test-preload-229573]
	I0830 21:59:32.892222  983589 provision.go:172] copyRemoteCerts
	I0830 21:59:32.892280  983589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:59:32.892306  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:32.894621  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.894921  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:32.894960  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:32.895087  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 21:59:32.895280  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:32.895452  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 21:59:32.895590  983589 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa Username:docker}
	I0830 21:59:32.988653  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0830 21:59:33.012116  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 21:59:33.034815  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 21:59:33.056226  983589 provision.go:86] duration metric: configureAuth took 283.752097ms
	I0830 21:59:33.056260  983589 buildroot.go:189] setting minikube options for container-runtime
	I0830 21:59:33.056486  983589 config.go:182] Loaded profile config "test-preload-229573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0830 21:59:33.056578  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:33.059118  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.059428  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:33.059471  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.059604  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 21:59:33.059841  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:33.060008  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:33.060134  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 21:59:33.060278  983589 main.go:141] libmachine: Using SSH client type: native
	I0830 21:59:33.060714  983589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0830 21:59:33.060737  983589 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:59:33.361926  983589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:59:33.361959  983589 machine.go:91] provisioned docker machine in 866.052181ms
	I0830 21:59:33.361969  983589 start.go:300] post-start starting for "test-preload-229573" (driver="kvm2")
	I0830 21:59:33.361980  983589 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:59:33.361996  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:33.362310  983589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:59:33.362344  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:33.364742  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.365140  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:33.365174  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.365353  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 21:59:33.365585  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:33.365772  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 21:59:33.365916  983589 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa Username:docker}
	I0830 21:59:33.457382  983589 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:59:33.461642  983589 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 21:59:33.461669  983589 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 21:59:33.461750  983589 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 21:59:33.461852  983589 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 21:59:33.461959  983589 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 21:59:33.470846  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:59:33.492938  983589 start.go:303] post-start completed in 130.953918ms
	I0830 21:59:33.492973  983589 fix.go:56] fixHost completed within 22.528300446s
	I0830 21:59:33.493003  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:33.495636  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.496066  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:33.496111  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.496254  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 21:59:33.496423  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:33.496622  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:33.496765  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 21:59:33.496860  983589 main.go:141] libmachine: Using SSH client type: native
	I0830 21:59:33.497231  983589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0830 21:59:33.497242  983589 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 21:59:33.620416  983589 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693432773.567903142
	
	I0830 21:59:33.620439  983589 fix.go:206] guest clock: 1693432773.567903142
	I0830 21:59:33.620447  983589 fix.go:219] Guest: 2023-08-30 21:59:33.567903142 +0000 UTC Remote: 2023-08-30 21:59:33.492977975 +0000 UTC m=+27.114962912 (delta=74.925167ms)
	I0830 21:59:33.620478  983589 fix.go:190] guest clock delta is within tolerance: 74.925167ms
	I0830 21:59:33.620483  983589 start.go:83] releasing machines lock for "test-preload-229573", held for 22.655822983s
	I0830 21:59:33.620506  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:33.620803  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetIP
	I0830 21:59:33.623334  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.623787  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:33.623821  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.623964  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:33.624445  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:33.624646  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 21:59:33.624773  983589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:59:33.624818  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:33.624879  983589 ssh_runner.go:195] Run: cat /version.json
	I0830 21:59:33.624911  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 21:59:33.627402  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.627430  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.627843  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:33.627874  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.627905  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:33.627923  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:33.628010  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 21:59:33.628115  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 21:59:33.628205  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:33.628280  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 21:59:33.628351  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 21:59:33.628436  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 21:59:33.628515  983589 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa Username:docker}
	I0830 21:59:33.628547  983589 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa Username:docker}
	I0830 21:59:33.742107  983589 ssh_runner.go:195] Run: systemctl --version
	I0830 21:59:33.747845  983589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:59:33.886671  983589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 21:59:33.893188  983589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 21:59:33.893254  983589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:59:33.909339  983589 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 21:59:33.909358  983589 start.go:466] detecting cgroup driver to use...
	I0830 21:59:33.909408  983589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:59:33.922144  983589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:59:33.933364  983589 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:59:33.933414  983589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:59:33.946602  983589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:59:33.957890  983589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:59:34.058520  983589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:59:34.177797  983589 docker.go:212] disabling docker service ...
	I0830 21:59:34.177865  983589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:59:34.189930  983589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:59:34.200721  983589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:59:34.306686  983589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:59:34.430207  983589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:59:34.442763  983589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:59:34.459697  983589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0830 21:59:34.459764  983589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:59:34.468791  983589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:59:34.468858  983589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:59:34.477813  983589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:59:34.486869  983589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:59:34.495702  983589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:59:34.504927  983589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:59:34.512827  983589 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 21:59:34.512891  983589 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 21:59:34.525187  983589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:59:34.533630  983589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:59:34.649278  983589 ssh_runner.go:195] Run: sudo systemctl restart crio
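The sed edits above touch only three keys in the CRI-O drop-in before the restart; a quick way to confirm the result on the guest (expected values shown as comments, illustrative) is:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.7"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"

The stray %!s(MISSING) fragments in the echoed printf and stat commands earlier are artifacts of the log formatter re-rendering messages that contain literal % verbs; the commands that actually ran on the guest used plain %s.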
	I0830 21:59:34.814084  983589 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:59:34.814170  983589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:59:34.822297  983589 start.go:534] Will wait 60s for crictl version
	I0830 21:59:34.822361  983589 ssh_runner.go:195] Run: which crictl
	I0830 21:59:34.826349  983589 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:59:34.860979  983589 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 21:59:34.861064  983589 ssh_runner.go:195] Run: crio --version
	I0830 21:59:34.907948  983589 ssh_runner.go:195] Run: crio --version
	I0830 21:59:34.963365  983589 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I0830 21:59:34.964854  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetIP
	I0830 21:59:34.967649  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:34.968056  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 21:59:34.968083  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 21:59:34.968257  983589 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 21:59:34.972673  983589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:59:34.985551  983589 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0830 21:59:34.985605  983589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:59:35.015763  983589 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0830 21:59:35.015841  983589 ssh_runner.go:195] Run: which lz4
	I0830 21:59:35.019908  983589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 21:59:35.024066  983589 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 21:59:35.024105  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0830 21:59:36.887327  983589 crio.go:444] Took 1.867441 seconds to copy over tarball
	I0830 21:59:36.887400  983589 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 21:59:39.753253  983589 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865820101s)
	I0830 21:59:39.753291  983589 crio.go:451] Took 2.865929 seconds to extract the tarball
	I0830 21:59:39.753305  983589 ssh_runner.go:146] rm: /preloaded.tar.lz4
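The preload step above is what normally makes TestPreload fast: a roughly 459 MB lz4 tarball of container images is copied into the guest and unpacked over /var so CRI-O's image store is already populated. Done by hand against the same VM it would look approximately like this (paths taken from the log; minikube streams the file as root over its own SSH runner, so a plain scp needs a user-writable destination first):

    scp -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa \
        /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.128:/tmp/preloaded.tar.lz4
    ssh -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa docker@192.168.39.128 \
        'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'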
	I0830 21:59:39.793913  983589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:59:39.836610  983589 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0830 21:59:39.836652  983589 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 21:59:39.836750  983589 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:59:39.836755  983589 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0830 21:59:39.836775  983589 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0830 21:59:39.836800  983589 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0830 21:59:39.836821  983589 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0830 21:59:39.836900  983589 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0830 21:59:39.836955  983589 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0830 21:59:39.836952  983589 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0830 21:59:39.838288  983589 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0830 21:59:39.838303  983589 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0830 21:59:39.838303  983589 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:59:39.838316  983589 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0830 21:59:39.838289  983589 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0830 21:59:39.838292  983589 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0830 21:59:39.838288  983589 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0830 21:59:39.838294  983589 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0830 21:59:40.005000  983589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0830 21:59:40.010764  983589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0830 21:59:40.012581  983589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0830 21:59:40.015509  983589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0830 21:59:40.021477  983589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0830 21:59:40.021654  983589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0830 21:59:40.022127  983589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0830 21:59:40.081146  983589 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0830 21:59:40.081199  983589 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0830 21:59:40.081252  983589 ssh_runner.go:195] Run: which crictl
	I0830 21:59:40.136507  983589 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:59:40.195690  983589 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0830 21:59:40.195737  983589 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0830 21:59:40.195796  983589 ssh_runner.go:195] Run: which crictl
	I0830 21:59:40.216585  983589 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0830 21:59:40.216636  983589 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0830 21:59:40.216733  983589 ssh_runner.go:195] Run: which crictl
	I0830 21:59:40.224924  983589 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0830 21:59:40.224976  983589 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0830 21:59:40.225026  983589 ssh_runner.go:195] Run: which crictl
	I0830 21:59:40.235015  983589 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0830 21:59:40.235039  983589 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0830 21:59:40.235066  983589 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0830 21:59:40.235071  983589 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0830 21:59:40.235117  983589 ssh_runner.go:195] Run: which crictl
	I0830 21:59:40.235117  983589 ssh_runner.go:195] Run: which crictl
	I0830 21:59:40.235173  983589 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0830 21:59:40.235213  983589 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0830 21:59:40.235236  983589 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0830 21:59:40.235244  983589 ssh_runner.go:195] Run: which crictl
	I0830 21:59:40.376414  983589 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0830 21:59:40.376462  983589 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0830 21:59:40.376517  983589 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0830 21:59:40.376519  983589 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0830 21:59:40.376586  983589 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0830 21:59:40.376718  983589 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0830 21:59:40.376805  983589 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0830 21:59:40.376812  983589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0830 21:59:40.442275  983589 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0830 21:59:40.442402  983589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0830 21:59:40.474595  983589 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0830 21:59:40.474673  983589 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0830 21:59:40.474710  983589 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0830 21:59:40.474712  983589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0830 21:59:40.474764  983589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0830 21:59:40.474781  983589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0830 21:59:40.487709  983589 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0830 21:59:40.487764  983589 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0830 21:59:40.487719  983589 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0830 21:59:40.487819  983589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0830 21:59:40.487839  983589 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0830 21:59:40.487845  983589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0830 21:59:40.487856  983589 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0830 21:59:40.487884  983589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0830 21:59:40.489763  983589 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0830 21:59:40.489786  983589 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0830 21:59:40.489809  983589 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0830 21:59:42.942770  983589 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.454926976s)
	I0830 21:59:42.942799  983589 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.454894184s)
	I0830 21:59:42.942808  983589 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0830 21:59:42.942822  983589 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0830 21:59:42.942823  983589 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: (2.454959171s)
	I0830 21:59:42.942848  983589 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0830 21:59:42.942849  983589 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0830 21:59:42.942909  983589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0830 21:59:43.695487  983589 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0830 21:59:43.695540  983589 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0830 21:59:43.695593  983589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0830 21:59:44.539899  983589 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0830 21:59:44.539939  983589 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0830 21:59:44.540012  983589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0830 21:59:44.982585  983589 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0830 21:59:44.982630  983589 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0830 21:59:44.982679  983589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0830 21:59:45.423857  983589 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0830 21:59:45.423908  983589 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0830 21:59:45.423952  983589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0830 21:59:46.169872  983589 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0830 21:59:46.169932  983589 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0830 21:59:46.170005  983589 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0830 21:59:46.313087  983589 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0830 21:59:46.313147  983589 cache_images.go:123] Successfully loaded all cached images
	I0830 21:59:46.313153  983589 cache_images.go:92] LoadImages completed in 6.476487036s
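Even though the tarball unpacked cleanly, the crictl check at 21:59:39 still did not see registry.k8s.io/kube-apiserver:v1.24.4, so minikube fell back to shipping each cached image archive from /var/lib/minikube/images and loading it with podman, which is what the "Transferred and loaded ... from cache" lines record. Per image that is essentially:

    sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
    sudo crictl images --output json   # re-check what the runtime can now see

That fallback accounts for the extra seconds visible between 21:59:40 and 21:59:46.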
	I0830 21:59:46.313268  983589 ssh_runner.go:195] Run: crio config
	I0830 21:59:46.367807  983589 cni.go:84] Creating CNI manager for ""
	I0830 21:59:46.367830  983589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:59:46.367856  983589 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:59:46.367881  983589 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-229573 NodeName:test-preload-229573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:59:46.368038  983589 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-229573"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:59:46.368121  983589 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-229573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-229573 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 21:59:46.368187  983589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0830 21:59:46.377379  983589 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:59:46.377458  983589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 21:59:46.385990  983589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0830 21:59:46.401245  983589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
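For context, once the kubelet unit and its 10-kubeadm.conf drop-in have been copied over (previous two lines), a standard way to confirm systemd has picked them up is sketched below; this is an illustrative check, not something the test run executes:
	sudo systemctl daemon-reload
	systemctl cat kubelet          # should list the 10-kubeadm.conf drop-in with the ExecStart shown earlier
	systemctl status kubelet --no-pager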
	I0830 21:59:46.416365  983589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
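The kubeadm config rendered earlier is staged as kubeadm.yaml.new here and promoted to kubeadm.yaml later in the log; if one wanted to sanity-check it by hand, a dry run along these lines would work (illustrative only, the --dry-run flag is not used by the test):
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run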
	I0830 21:59:46.432525  983589 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0830 21:59:46.436197  983589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
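The /etc/hosts one-liner above removes any stale control-plane.minikube.internal entry and appends the current address; the same steps broken out for readability (illustrative, with a hypothetical temp file name instead of /tmp/h.$$):
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '192.168.39.128\tcontrol-plane.minikube.internal\n'
	} > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts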
	I0830 21:59:46.447532  983589 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573 for IP: 192.168.39.128
	I0830 21:59:46.447574  983589 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:59:46.447754  983589 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 21:59:46.447836  983589 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 21:59:46.447922  983589 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/client.key
	I0830 21:59:46.448000  983589 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/apiserver.key.78d22da8
	I0830 21:59:46.448055  983589 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/proxy-client.key
	I0830 21:59:46.448205  983589 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 21:59:46.448246  983589 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 21:59:46.448262  983589 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 21:59:46.448301  983589 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 21:59:46.448335  983589 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:59:46.448376  983589 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 21:59:46.448432  983589 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 21:59:46.449102  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 21:59:46.471743  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 21:59:46.493781  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 21:59:46.515529  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 21:59:46.536588  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:59:46.557636  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:59:46.579104  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:59:46.599832  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 21:59:46.620933  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 21:59:46.642270  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:59:46.664438  983589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 21:59:46.686152  983589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 21:59:46.702160  983589 ssh_runner.go:195] Run: openssl version
	I0830 21:59:46.707501  983589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:59:46.717687  983589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:59:46.722267  983589 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:59:46.722321  983589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:59:46.727598  983589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:59:46.738241  983589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 21:59:46.748768  983589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 21:59:46.753369  983589 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 21:59:46.753445  983589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 21:59:46.758711  983589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 21:59:46.768786  983589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 21:59:46.779031  983589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 21:59:46.783333  983589 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 21:59:46.783369  983589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 21:59:46.788644  983589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
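The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is why each block first runs openssl x509 -hash; a minimal sketch of how one such link is derived:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # here HASH is b5213941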
	I0830 21:59:46.798332  983589 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:59:46.802483  983589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 21:59:46.808090  983589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 21:59:46.814199  983589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 21:59:46.819660  983589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 21:59:46.825109  983589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 21:59:46.830340  983589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
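Each of the -checkend 86400 calls above asserts that the certificate will still be valid 86400 seconds (24 hours) from now; a standalone sketch of the same check:
	# exit status 0: still valid in 24h; non-zero: expiring soon
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400; then
	  echo "cert valid for at least another day"
	else
	  echo "cert expires within 24h"
	fi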
	I0830 21:59:46.835658  983589 kubeadm.go:404] StartCluster: {Name:test-preload-229573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-229573 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:59:46.835796  983589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 21:59:46.835847  983589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:59:46.863863  983589 cri.go:89] found id: ""
	I0830 21:59:46.863949  983589 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 21:59:46.873821  983589 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 21:59:46.873848  983589 kubeadm.go:636] restartCluster start
	I0830 21:59:46.873908  983589 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 21:59:46.883319  983589 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:46.883822  983589 kubeconfig.go:135] verify returned: extract IP: "test-preload-229573" does not appear in /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:59:46.883983  983589 kubeconfig.go:146] "test-preload-229573" context is missing from /home/jenkins/minikube-integration/17114-955377/kubeconfig - will repair!
	I0830 21:59:46.884253  983589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:59:46.884906  983589 kapi.go:59] client config for test-preload-229573: &rest.Config{Host:"https://192.168.39.128:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:59:46.885846  983589 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 21:59:46.894977  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:46.895056  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:46.906520  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:46.906544  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:46.906616  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:46.917801  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:47.418565  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:47.418660  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:47.430680  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:47.918209  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:47.918302  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:47.929931  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:48.418581  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:48.418666  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:48.430679  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:48.918190  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:48.918317  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:48.930372  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:49.417897  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:49.417985  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:49.429778  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:49.918558  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:49.918639  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:49.930641  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:50.418267  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:50.418374  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:50.430666  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:50.918228  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:50.918336  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:50.929529  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:51.418725  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:51.418835  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:51.430563  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:51.918015  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:51.918117  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:51.931240  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:52.417889  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:52.417993  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:52.430740  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:52.918241  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:52.918339  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:52.931344  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:53.417912  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:53.418026  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:53.429392  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:53.917931  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:53.918037  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:53.930333  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:54.418947  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:54.419025  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:54.431274  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:54.918881  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:54.918975  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:54.932432  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:55.418119  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:55.418243  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:55.429000  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:55.918588  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:55.918676  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:55.929718  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:56.417933  983589 api_server.go:166] Checking apiserver status ...
	I0830 21:59:56.418022  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 21:59:56.429302  983589 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:59:56.895959  983589 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 21:59:56.895990  983589 kubeadm.go:1128] stopping kube-system containers ...
	I0830 21:59:56.896016  983589 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 21:59:56.896089  983589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:59:56.925407  983589 cri.go:89] found id: ""
	I0830 21:59:56.925475  983589 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 21:59:56.939460  983589 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 21:59:56.947676  983589 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:59:56.947733  983589 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 21:59:56.955857  983589 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 21:59:56.955879  983589 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:59:57.065742  983589 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:59:57.767309  983589 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:59:58.079401  983589 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:59:58.150228  983589 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:59:58.225585  983589 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:59:58.225670  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:59:58.242409  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:59:58.752994  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:59:59.253446  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:59:59.753071  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:00:00.252980  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:00:00.290353  983589 api_server.go:72] duration metric: took 2.06476491s to wait for apiserver process to appear ...
	I0830 22:00:00.290386  983589 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:00:00.290409  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:00.291052  983589 api_server.go:269] stopped: https://192.168.39.128:8443/healthz: Get "https://192.168.39.128:8443/healthz": dial tcp 192.168.39.128:8443: connect: connection refused
	I0830 22:00:00.291101  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:00.291531  983589 api_server.go:269] stopped: https://192.168.39.128:8443/healthz: Get "https://192.168.39.128:8443/healthz": dial tcp 192.168.39.128:8443: connect: connection refused
	I0830 22:00:00.792300  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:05.638047  983589 api_server.go:279] https://192.168.39.128:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:00:05.638087  983589 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:00:05.638100  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:05.646655  983589 api_server.go:279] https://192.168.39.128:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:00:05.646685  983589 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:00:05.792010  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:05.819017  983589 api_server.go:279] https://192.168.39.128:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0830 22:00:05.819059  983589 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0830 22:00:06.292023  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:06.300420  983589 api_server.go:279] https://192.168.39.128:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0830 22:00:06.300446  983589 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0830 22:00:06.791632  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:06.797388  983589 api_server.go:279] https://192.168.39.128:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0830 22:00:06.797412  983589 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0830 22:00:07.291854  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:07.298207  983589 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0830 22:00:07.307846  983589 api_server.go:141] control plane version: v1.24.4
	I0830 22:00:07.307872  983589 api_server.go:131] duration metric: took 7.017479664s to wait for apiserver health ...
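The healthz progression above (connection refused, then 403 for system:anonymous, then 500 while post-start hooks settle, then 200 "ok") can be reproduced by hand; illustrative probes, assuming the endpoint is reachable from wherever they are run:
	# plain probe; anonymous requests may get 403 depending on RBAC
	curl -sk https://192.168.39.128:8443/healthz; echo
	# per-check listing, like the 500 bodies quoted above
	curl -sk "https://192.168.39.128:8443/healthz?verbose"; echo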
	I0830 22:00:07.307881  983589 cni.go:84] Creating CNI manager for ""
	I0830 22:00:07.307894  983589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:00:07.309522  983589 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:00:07.310936  983589 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:00:07.324109  983589 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
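The 457 bytes written to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI config; the sketch below is an illustrative bridge+portmap conflist of the same general shape for the 10.244.0.0/16 pod CIDR, not the exact file the log refers to:
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF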
	I0830 22:00:07.350629  983589 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:00:07.364730  983589 system_pods.go:59] 7 kube-system pods found
	I0830 22:00:07.364766  983589 system_pods.go:61] "coredns-6d4b75cb6d-9qkv2" [34ebcb0e-20c1-4273-b96d-23986a3ca37b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:00:07.364782  983589 system_pods.go:61] "etcd-test-preload-229573" [efef5e0a-7eb1-4a7d-ab90-5e84818bf4f6] Running
	I0830 22:00:07.364789  983589 system_pods.go:61] "kube-apiserver-test-preload-229573" [f4218e19-2b44-4b37-b086-bebe46b70ad2] Running
	I0830 22:00:07.364801  983589 system_pods.go:61] "kube-controller-manager-test-preload-229573" [c7e62d18-b8df-435c-8a08-9ce23e96771f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:00:07.364810  983589 system_pods.go:61] "kube-proxy-ss8jg" [4e9c7421-aa35-4b98-a722-2c2cbb2fff45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:00:07.364817  983589 system_pods.go:61] "kube-scheduler-test-preload-229573" [acddf2d0-50f2-4cde-a216-9571031af7fe] Running
	I0830 22:00:07.364831  983589 system_pods.go:61] "storage-provisioner" [bfc5d77b-babb-4038-ad77-a226f68bf053] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:00:07.364840  983589 system_pods.go:74] duration metric: took 14.189787ms to wait for pod list to return data ...
	I0830 22:00:07.364850  983589 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:00:07.372712  983589 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:00:07.372742  983589 node_conditions.go:123] node cpu capacity is 2
	I0830 22:00:07.372756  983589 node_conditions.go:105] duration metric: took 7.901198ms to run NodePressure ...
	I0830 22:00:07.372785  983589 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:00:07.640914  983589 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:00:07.652328  983589 kubeadm.go:787] kubelet initialised
	I0830 22:00:07.652358  983589 kubeadm.go:788] duration metric: took 11.406635ms waiting for restarted kubelet to initialise ...
	I0830 22:00:07.652368  983589 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:00:07.702444  983589 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9qkv2" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:07.711754  983589 pod_ready.go:97] node "test-preload-229573" hosting pod "coredns-6d4b75cb6d-9qkv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:07.711801  983589 pod_ready.go:81] duration metric: took 9.329889ms waiting for pod "coredns-6d4b75cb6d-9qkv2" in "kube-system" namespace to be "Ready" ...
	E0830 22:00:07.711814  983589 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-229573" hosting pod "coredns-6d4b75cb6d-9qkv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:07.711836  983589 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:07.722363  983589 pod_ready.go:97] node "test-preload-229573" hosting pod "etcd-test-preload-229573" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:07.722388  983589 pod_ready.go:81] duration metric: took 10.543661ms waiting for pod "etcd-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	E0830 22:00:07.722395  983589 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-229573" hosting pod "etcd-test-preload-229573" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:07.722404  983589 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:07.733478  983589 pod_ready.go:97] node "test-preload-229573" hosting pod "kube-apiserver-test-preload-229573" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:07.733504  983589 pod_ready.go:81] duration metric: took 11.094492ms waiting for pod "kube-apiserver-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	E0830 22:00:07.733512  983589 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-229573" hosting pod "kube-apiserver-test-preload-229573" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:07.733522  983589 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:07.755591  983589 pod_ready.go:97] node "test-preload-229573" hosting pod "kube-controller-manager-test-preload-229573" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:07.755632  983589 pod_ready.go:81] duration metric: took 22.100902ms waiting for pod "kube-controller-manager-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	E0830 22:00:07.755642  983589 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-229573" hosting pod "kube-controller-manager-test-preload-229573" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:07.755660  983589 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ss8jg" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:08.155481  983589 pod_ready.go:97] node "test-preload-229573" hosting pod "kube-proxy-ss8jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:08.155515  983589 pod_ready.go:81] duration metric: took 399.846554ms waiting for pod "kube-proxy-ss8jg" in "kube-system" namespace to be "Ready" ...
	E0830 22:00:08.155525  983589 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-229573" hosting pod "kube-proxy-ss8jg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:08.155533  983589 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:08.555081  983589 pod_ready.go:97] node "test-preload-229573" hosting pod "kube-scheduler-test-preload-229573" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:08.555106  983589 pod_ready.go:81] duration metric: took 399.566402ms waiting for pod "kube-scheduler-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	E0830 22:00:08.555113  983589 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-229573" hosting pod "kube-scheduler-test-preload-229573" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:08.555123  983589 pod_ready.go:38] duration metric: took 902.741881ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
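Roughly the same readiness gate that pod_ready applies here can be expressed with kubectl; an illustrative equivalent for one of the listed labels (not what the test binary runs, and it would block until the node itself reports Ready):
	kubectl --context test-preload-229573 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m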
	I0830 22:00:08.555144  983589 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:00:08.566251  983589 ops.go:34] apiserver oom_adj: -16
	I0830 22:00:08.566272  983589 kubeadm.go:640] restartCluster took 21.692418888s
	I0830 22:00:08.566279  983589 kubeadm.go:406] StartCluster complete in 21.730628794s
	I0830 22:00:08.566294  983589 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:00:08.566372  983589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:00:08.567057  983589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:00:08.567273  983589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:00:08.567467  983589 config.go:182] Loaded profile config "test-preload-229573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0830 22:00:08.567406  983589 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:00:08.567539  983589 addons.go:69] Setting storage-provisioner=true in profile "test-preload-229573"
	I0830 22:00:08.567560  983589 addons.go:231] Setting addon storage-provisioner=true in "test-preload-229573"
	W0830 22:00:08.567569  983589 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:00:08.567540  983589 addons.go:69] Setting default-storageclass=true in profile "test-preload-229573"
	I0830 22:00:08.567661  983589 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-229573"
	I0830 22:00:08.567633  983589 host.go:66] Checking if "test-preload-229573" exists ...
	I0830 22:00:08.567864  983589 kapi.go:59] client config for test-preload-229573: &rest.Config{Host:"https://192.168.39.128:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 22:00:08.568115  983589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:00:08.568159  983589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:00:08.568190  983589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:00:08.568214  983589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:00:08.571694  983589 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-229573" context rescaled to 1 replicas
	I0830 22:00:08.571728  983589 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:00:08.573807  983589 out.go:177] * Verifying Kubernetes components...
	I0830 22:00:08.575161  983589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:00:08.583722  983589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0830 22:00:08.583724  983589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0830 22:00:08.584236  983589 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:00:08.584242  983589 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:00:08.584868  983589 main.go:141] libmachine: Using API Version  1
	I0830 22:00:08.584888  983589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:00:08.584981  983589 main.go:141] libmachine: Using API Version  1
	I0830 22:00:08.584995  983589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:00:08.585269  983589 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:00:08.585392  983589 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:00:08.585568  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetState
	I0830 22:00:08.585898  983589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:00:08.585959  983589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:00:08.588319  983589 kapi.go:59] client config for test-preload-229573: &rest.Config{Host:"https://192.168.39.128:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/profiles/test-preload-229573/client.key", CAFile:"/home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63da0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 22:00:08.599836  983589 addons.go:231] Setting addon default-storageclass=true in "test-preload-229573"
	W0830 22:00:08.599863  983589 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:00:08.599897  983589 host.go:66] Checking if "test-preload-229573" exists ...
	I0830 22:00:08.600316  983589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:00:08.600373  983589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:00:08.601948  983589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0830 22:00:08.602400  983589 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:00:08.603028  983589 main.go:141] libmachine: Using API Version  1
	I0830 22:00:08.603060  983589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:00:08.603394  983589 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:00:08.603597  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetState
	I0830 22:00:08.605376  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 22:00:08.607738  983589 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:00:08.609290  983589 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:00:08.609320  983589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:00:08.609346  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 22:00:08.612721  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 22:00:08.613193  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 22:00:08.613223  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 22:00:08.613384  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 22:00:08.613600  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 22:00:08.613823  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 22:00:08.613994  983589 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa Username:docker}
	I0830 22:00:08.617741  983589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0830 22:00:08.618175  983589 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:00:08.618741  983589 main.go:141] libmachine: Using API Version  1
	I0830 22:00:08.618765  983589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:00:08.619132  983589 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:00:08.619591  983589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:00:08.619632  983589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:00:08.635696  983589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I0830 22:00:08.636162  983589 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:00:08.636721  983589 main.go:141] libmachine: Using API Version  1
	I0830 22:00:08.636746  983589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:00:08.637131  983589 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:00:08.637339  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetState
	I0830 22:00:08.639003  983589 main.go:141] libmachine: (test-preload-229573) Calling .DriverName
	I0830 22:00:08.639264  983589 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:00:08.639282  983589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:00:08.639304  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHHostname
	I0830 22:00:08.642230  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 22:00:08.642655  983589 main.go:141] libmachine: (test-preload-229573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:42:15", ip: ""} in network mk-test-preload-229573: {Iface:virbr1 ExpiryTime:2023-08-30 22:57:41 +0000 UTC Type:0 Mac:52:54:00:3e:42:15 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-229573 Clientid:01:52:54:00:3e:42:15}
	I0830 22:00:08.642693  983589 main.go:141] libmachine: (test-preload-229573) DBG | domain test-preload-229573 has defined IP address 192.168.39.128 and MAC address 52:54:00:3e:42:15 in network mk-test-preload-229573
	I0830 22:00:08.642897  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHPort
	I0830 22:00:08.643144  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHKeyPath
	I0830 22:00:08.643332  983589 main.go:141] libmachine: (test-preload-229573) Calling .GetSSHUsername
	I0830 22:00:08.643459  983589 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/test-preload-229573/id_rsa Username:docker}
	I0830 22:00:08.762875  983589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:00:08.771840  983589 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0830 22:00:08.771840  983589 node_ready.go:35] waiting up to 6m0s for node "test-preload-229573" to be "Ready" ...
	I0830 22:00:08.785005  983589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:00:09.684719  983589 main.go:141] libmachine: Making call to close driver server
	I0830 22:00:09.684741  983589 main.go:141] libmachine: Making call to close driver server
	I0830 22:00:09.684753  983589 main.go:141] libmachine: (test-preload-229573) Calling .Close
	I0830 22:00:09.684792  983589 main.go:141] libmachine: (test-preload-229573) Calling .Close
	I0830 22:00:09.685078  983589 main.go:141] libmachine: (test-preload-229573) DBG | Closing plugin on server side
	I0830 22:00:09.685120  983589 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:00:09.685129  983589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:00:09.685151  983589 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:00:09.685162  983589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:00:09.685171  983589 main.go:141] libmachine: Making call to close driver server
	I0830 22:00:09.685181  983589 main.go:141] libmachine: (test-preload-229573) Calling .Close
	I0830 22:00:09.685183  983589 main.go:141] libmachine: Making call to close driver server
	I0830 22:00:09.685200  983589 main.go:141] libmachine: (test-preload-229573) Calling .Close
	I0830 22:00:09.685418  983589 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:00:09.685435  983589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:00:09.685434  983589 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:00:09.685447  983589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:00:09.685447  983589 main.go:141] libmachine: Making call to close driver server
	I0830 22:00:09.685544  983589 main.go:141] libmachine: (test-preload-229573) Calling .Close
	I0830 22:00:09.685748  983589 main.go:141] libmachine: (test-preload-229573) DBG | Closing plugin on server side
	I0830 22:00:09.685780  983589 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:00:09.685789  983589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:00:09.689039  983589 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 22:00:09.690285  983589 addons.go:502] enable addons completed in 1.122899455s: enabled=[storage-provisioner default-storageclass]
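	For reference, the two addon manifests applied above (storage-provisioner and the default storageclass) can be spot-checked from the host afterwards. This is a manual equivalent and not part of the captured run; it assumes the default kubectl context name matches the profile, as the final "Done!" line of this log indicates:
	    kubectl --context test-preload-229573 get storageclass
	    kubectl --context test-preload-229573 -n kube-system get pod storage-provisioner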
	I0830 22:00:10.959483  983589 node_ready.go:58] node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:13.460931  983589 node_ready.go:58] node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:15.958478  983589 node_ready.go:58] node "test-preload-229573" has status "Ready":"False"
	I0830 22:00:16.459244  983589 node_ready.go:49] node "test-preload-229573" has status "Ready":"True"
	I0830 22:00:16.459268  983589 node_ready.go:38] duration metric: took 7.687397263s waiting for node "test-preload-229573" to be "Ready" ...
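	The node readiness the harness polls for here can also be checked by hand; a rough manual equivalent (illustrative, not from the test run) is:
	    kubectl --context test-preload-229573 wait --for=condition=Ready node/test-preload-229573 --timeout=6m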
	I0830 22:00:16.459277  983589 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:00:16.464451  983589 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9qkv2" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:16.469623  983589 pod_ready.go:92] pod "coredns-6d4b75cb6d-9qkv2" in "kube-system" namespace has status "Ready":"True"
	I0830 22:00:16.469641  983589 pod_ready.go:81] duration metric: took 5.166292ms waiting for pod "coredns-6d4b75cb6d-9qkv2" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:16.469655  983589 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:16.474183  983589 pod_ready.go:92] pod "etcd-test-preload-229573" in "kube-system" namespace has status "Ready":"True"
	I0830 22:00:16.474201  983589 pod_ready.go:81] duration metric: took 4.541663ms waiting for pod "etcd-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:16.474209  983589 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:16.478396  983589 pod_ready.go:92] pod "kube-apiserver-test-preload-229573" in "kube-system" namespace has status "Ready":"True"
	I0830 22:00:16.478413  983589 pod_ready.go:81] duration metric: took 4.199474ms waiting for pod "kube-apiserver-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:16.478423  983589 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:18.867160  983589 pod_ready.go:102] pod "kube-controller-manager-test-preload-229573" in "kube-system" namespace has status "Ready":"False"
	I0830 22:00:19.867405  983589 pod_ready.go:92] pod "kube-controller-manager-test-preload-229573" in "kube-system" namespace has status "Ready":"True"
	I0830 22:00:19.867448  983589 pod_ready.go:81] duration metric: took 3.389016538s waiting for pod "kube-controller-manager-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:19.867465  983589 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ss8jg" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:19.873606  983589 pod_ready.go:92] pod "kube-proxy-ss8jg" in "kube-system" namespace has status "Ready":"True"
	I0830 22:00:19.873625  983589 pod_ready.go:81] duration metric: took 6.15086ms waiting for pod "kube-proxy-ss8jg" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:19.873635  983589 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:20.059902  983589 pod_ready.go:92] pod "kube-scheduler-test-preload-229573" in "kube-system" namespace has status "Ready":"True"
	I0830 22:00:20.059933  983589 pod_ready.go:81] duration metric: took 186.290494ms waiting for pod "kube-scheduler-test-preload-229573" in "kube-system" namespace to be "Ready" ...
	I0830 22:00:20.059949  983589 pod_ready.go:38] duration metric: took 3.60066393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
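	Each pod_ready wait above corresponds to a name- or label-based readiness check against kube-system. As one illustrative manual equivalent (for the kube-dns selector only, not part of the test output):
	    kubectl --context test-preload-229573 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m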
	I0830 22:00:20.059974  983589 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:00:20.060034  983589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:00:20.073778  983589 api_server.go:72] duration metric: took 11.502022072s to wait for apiserver process to appear ...
	I0830 22:00:20.073808  983589 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:00:20.073827  983589 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0830 22:00:20.079336  983589 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0830 22:00:20.080409  983589 api_server.go:141] control plane version: v1.24.4
	I0830 22:00:20.080426  983589 api_server.go:131] duration metric: took 6.611893ms to wait for apiserver health ...
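	The healthz probe above hits the API server directly over HTTPS; a manual equivalent that reuses the cluster credentials (not part of the captured run) is:
	    kubectl --context test-preload-229573 get --raw='/healthz'
	which should print "ok" while the control plane is healthy.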
	I0830 22:00:20.080433  983589 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:00:20.262438  983589 system_pods.go:59] 7 kube-system pods found
	I0830 22:00:20.262516  983589 system_pods.go:61] "coredns-6d4b75cb6d-9qkv2" [34ebcb0e-20c1-4273-b96d-23986a3ca37b] Running
	I0830 22:00:20.262540  983589 system_pods.go:61] "etcd-test-preload-229573" [efef5e0a-7eb1-4a7d-ab90-5e84818bf4f6] Running
	I0830 22:00:20.262548  983589 system_pods.go:61] "kube-apiserver-test-preload-229573" [f4218e19-2b44-4b37-b086-bebe46b70ad2] Running
	I0830 22:00:20.262553  983589 system_pods.go:61] "kube-controller-manager-test-preload-229573" [c7e62d18-b8df-435c-8a08-9ce23e96771f] Running
	I0830 22:00:20.262559  983589 system_pods.go:61] "kube-proxy-ss8jg" [4e9c7421-aa35-4b98-a722-2c2cbb2fff45] Running
	I0830 22:00:20.262564  983589 system_pods.go:61] "kube-scheduler-test-preload-229573" [acddf2d0-50f2-4cde-a216-9571031af7fe] Running
	I0830 22:00:20.262570  983589 system_pods.go:61] "storage-provisioner" [bfc5d77b-babb-4038-ad77-a226f68bf053] Running
	I0830 22:00:20.262577  983589 system_pods.go:74] duration metric: took 182.138537ms to wait for pod list to return data ...
	I0830 22:00:20.262590  983589 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:00:20.458955  983589 default_sa.go:45] found service account: "default"
	I0830 22:00:20.458997  983589 default_sa.go:55] duration metric: took 196.389216ms for default service account to be created ...
	I0830 22:00:20.459009  983589 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:00:20.662771  983589 system_pods.go:86] 7 kube-system pods found
	I0830 22:00:20.662804  983589 system_pods.go:89] "coredns-6d4b75cb6d-9qkv2" [34ebcb0e-20c1-4273-b96d-23986a3ca37b] Running
	I0830 22:00:20.662809  983589 system_pods.go:89] "etcd-test-preload-229573" [efef5e0a-7eb1-4a7d-ab90-5e84818bf4f6] Running
	I0830 22:00:20.662813  983589 system_pods.go:89] "kube-apiserver-test-preload-229573" [f4218e19-2b44-4b37-b086-bebe46b70ad2] Running
	I0830 22:00:20.662818  983589 system_pods.go:89] "kube-controller-manager-test-preload-229573" [c7e62d18-b8df-435c-8a08-9ce23e96771f] Running
	I0830 22:00:20.662826  983589 system_pods.go:89] "kube-proxy-ss8jg" [4e9c7421-aa35-4b98-a722-2c2cbb2fff45] Running
	I0830 22:00:20.662830  983589 system_pods.go:89] "kube-scheduler-test-preload-229573" [acddf2d0-50f2-4cde-a216-9571031af7fe] Running
	I0830 22:00:20.662835  983589 system_pods.go:89] "storage-provisioner" [bfc5d77b-babb-4038-ad77-a226f68bf053] Running
	I0830 22:00:20.662842  983589 system_pods.go:126] duration metric: took 203.827376ms to wait for k8s-apps to be running ...
	I0830 22:00:20.662848  983589 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:00:20.662900  983589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:00:20.677437  983589 system_svc.go:56] duration metric: took 14.577481ms WaitForService to wait for kubelet.
	I0830 22:00:20.677464  983589 kubeadm.go:581] duration metric: took 12.105712683s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:00:20.677482  983589 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:00:20.860494  983589 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:00:20.860526  983589 node_conditions.go:123] node cpu capacity is 2
	I0830 22:00:20.860537  983589 node_conditions.go:105] duration metric: took 183.050056ms to run NodePressure ...
	I0830 22:00:20.860547  983589 start.go:228] waiting for startup goroutines ...
	I0830 22:00:20.860553  983589 start.go:233] waiting for cluster config update ...
	I0830 22:00:20.860562  983589 start.go:242] writing updated cluster config ...
	I0830 22:00:20.860860  983589 ssh_runner.go:195] Run: rm -f paused
	I0830 22:00:20.908893  983589 start.go:600] kubectl: 1.28.1, cluster: 1.24.4 (minor skew: 4)
	I0830 22:00:20.911247  983589 out.go:177] 
	W0830 22:00:20.912744  983589 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0830 22:00:20.914212  983589 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0830 22:00:20.915539  983589 out.go:177] * Done! kubectl is now configured to use "test-preload-229573" cluster and "default" namespace by default
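	As the warning above notes, the host kubectl (1.28.1) is four minor versions ahead of the cluster (1.24.4). The log's own suggestion is to use the version-matched kubectl that minikube bundles, for example:
	    minikube -p test-preload-229573 kubectl -- get pods -A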
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 21:59:22 UTC, ends at Wed 2023-08-30 22:00:21 UTC. --
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.430776801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fa55c32d-af4b-4b97-bab6-e64fa6b4befc name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.688287867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=205db40b-1662-4a0f-9094-4a5f6216acc5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.688352453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=205db40b-1662-4a0f-9094-4a5f6216acc5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.688602822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=205db40b-1662-4a0f-9094-4a5f6216acc5 name=/runtime.v1alpha2.RuntimeService/ListContainers
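	The repeated ListContainers debug entries in this journal are routine CRI polling over the same container set. The equivalent interactive view on the node (a manual check, not part of the captured run) would be:
	    minikube -p test-preload-229573 ssh -- sudo crictl ps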
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.724893567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=86db57ac-6993-48c7-b8f3-a61a2a7bc37d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.724956503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=86db57ac-6993-48c7-b8f3-a61a2a7bc37d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.725150643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=86db57ac-6993-48c7-b8f3-a61a2a7bc37d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.757560019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b891dff7-e86b-4cb0-985f-4fe98a61f76d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.757631196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b891dff7-e86b-4cb0-985f-4fe98a61f76d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.757792174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b891dff7-e86b-4cb0-985f-4fe98a61f76d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.796059628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=27a0cb5a-c4f4-4aa4-8946-f2785871f1e4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.796122179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=27a0cb5a-c4f4-4aa4-8946-f2785871f1e4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.796299146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=27a0cb5a-c4f4-4aa4-8946-f2785871f1e4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.828705084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b388808-02a4-4a08-bf66-7b93b1238ea1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.828792558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b388808-02a4-4a08-bf66-7b93b1238ea1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.829044539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b388808-02a4-4a08-bf66-7b93b1238ea1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.866687087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=60e2ddea-e812-425e-99aa-5d0e36276cd1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.866768900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=60e2ddea-e812-425e-99aa-5d0e36276cd1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.866934907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=60e2ddea-e812-425e-99aa-5d0e36276cd1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.899957414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f2d423b-d299-4b1d-b81a-6da226bcc4e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.900024746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f2d423b-d299-4b1d-b81a-6da226bcc4e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.900211421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f2d423b-d299-4b1d-b81a-6da226bcc4e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.928313619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6fbb3770-e667-4d3a-b01d-db6eb3d82fdb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.928373993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6fbb3770-e667-4d3a-b01d-db6eb3d82fdb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:00:21 test-preload-229573 crio[703]: time="2023-08-30 22:00:21.928644037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3,PodSandboxId:9ee833329a230db4366d37e927b195b20b1b2b142fb3650355cd8e4b332aff30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1693432810761019031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9qkv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ebcb0e-20c1-4273-b96d-23986a3ca37b,},Annotations:map[string]string{io.kubernetes.container.hash: a45a37d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a,PodSandboxId:7d8fae49a32cf74e1170d02874e88a4a8b4cb56e55445668a9ecc12b97bd0290,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693432808197771457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: bfc5d77b-babb-4038-ad77-a226f68bf053,},Annotations:map[string]string{io.kubernetes.container.hash: 8ad48639,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c,PodSandboxId:51daa1e6df95e43a4d99332a8c24515fb190107d41ae0decbc417eb7b967fb00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1693432807681617358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ss8jg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
4e9c7421-aa35-4b98-a722-2c2cbb2fff45,},Annotations:map[string]string{io.kubernetes.container.hash: ecb57cba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3,PodSandboxId:f39757e5869ecb0e6731c372fc7d3ea6d2e2173a531c0fc529d4e50c134f6ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1693432799689262889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 250f1691fd
0e14109f4cfaacd997d996,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2,PodSandboxId:afb8d08a69a1212dec225815304840d0e5421065b450bb8c145be44d15feb96e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1693432799616562945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0519d2d59566f7da35b38f394bc12ea3,},Annotations:map[string]string
{io.kubernetes.container.hash: cfe3380d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7,PodSandboxId:ff7223c8e7640e374529d0a6eb4e471928436b09fc47ab15f739f12ecd1de221,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1693432799480629841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0af60cf64b7e991f2659fe20071e2d6,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d,PodSandboxId:aed321f59aa9dbdd539dd277379a87c07a693202359a4539e0c2cdc6a80c8c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1693432799460375982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-229573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100b032803cfef2a834020218e3187db,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6fbb3770-e667-4d3a-b01d-db6eb3d82fdb name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	cf72e9717e10c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   11 seconds ago      Running             coredns                   1                   9ee833329a230
	8a9324d341344       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   7d8fae49a32cf
	16218f8a9fbee       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   51daa1e6df95e
	97bc3c9cbec5c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   f39757e5869ec
	97fc44830dee7       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   afb8d08a69a12
	8a3d3c98f3dcb       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   ff7223c8e7640
	272d09a6f1909       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   aed321f59aa9d
	
	* 
	* ==> coredns [cf72e9717e10c4519a2f2b1115b85bc48fd04ac70586cddd2e804501addcd3a3] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38580 - 20640 "HINFO IN 7409524991837578437.502976120664415815. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01360766s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-229573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-229573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=test-preload-229573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T21_58_39_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:58:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-229573
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:00:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:00:16 +0000   Wed, 30 Aug 2023 21:58:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:00:16 +0000   Wed, 30 Aug 2023 21:58:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:00:16 +0000   Wed, 30 Aug 2023 21:58:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:00:16 +0000   Wed, 30 Aug 2023 22:00:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    test-preload-229573
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a2f4b98e6ca4ea4ac300ac56a29ce75
	  System UUID:                5a2f4b98-e6ca-4ea4-ac30-0ac56a29ce75
	  Boot ID:                    9da615c0-1341-4ce0-89e2-4e794b39dec4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9qkv2                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     90s
	  kube-system                 etcd-test-preload-229573                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         104s
	  kube-system                 kube-apiserver-test-preload-229573             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-test-preload-229573    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-ss8jg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-test-preload-229573             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  113s (x5 over 113s)  kubelet          Node test-preload-229573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x4 over 113s)  kubelet          Node test-preload-229573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 113s)  kubelet          Node test-preload-229573 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node test-preload-229573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node test-preload-229573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node test-preload-229573 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                93s                  kubelet          Node test-preload-229573 status is now: NodeReady
	  Normal  RegisteredNode           91s                  node-controller  Node test-preload-229573 event: Registered Node test-preload-229573 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node test-preload-229573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node test-preload-229573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node test-preload-229573 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-229573 event: Registered Node test-preload-229573 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug30 21:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072979] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.295612] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.262566] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158154] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.660306] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.058740] systemd-fstab-generator[629]: Ignoring "noauto" for root device
	[  +0.108369] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.141168] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.119376] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.221750] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[ +23.423517] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[Aug30 22:00] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.967494] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [97fc44830dee77364bebeec54ce3c59ce4677bfafa24cfedfa989c0e8f3a32f2] <==
	* {"level":"info","ts":"2023-08-30T22:00:01.572Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"fa515506e66f6916","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-08-30T22:00:01.581Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-30T22:00:01.582Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fa515506e66f6916","initial-advertise-peer-urls":["https://192.168.39.128:2380"],"listen-peer-urls":["https://192.168.39.128:2380"],"advertise-client-urls":["https://192.168.39.128:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.128:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-30T22:00:01.582Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-30T22:00:01.582Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2023-08-30T22:00:01.582Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2023-08-30T22:00:01.583Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-08-30T22:00:01.583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 switched to configuration voters=(18037291470719772950)"}
	{"level":"info","ts":"2023-08-30T22:00:01.584Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b64da5b92548cbb8","local-member-id":"fa515506e66f6916","added-peer-id":"fa515506e66f6916","added-peer-peer-urls":["https://192.168.39.128:2380"]}
	{"level":"info","ts":"2023-08-30T22:00:01.586Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b64da5b92548cbb8","local-member-id":"fa515506e66f6916","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:00:01.587Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:00:03.146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-30T22:00:03.146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-30T22:00:03.146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 received MsgPreVoteResp from fa515506e66f6916 at term 2"}
	{"level":"info","ts":"2023-08-30T22:00:03.146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 became candidate at term 3"}
	{"level":"info","ts":"2023-08-30T22:00:03.146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 received MsgVoteResp from fa515506e66f6916 at term 3"}
	{"level":"info","ts":"2023-08-30T22:00:03.146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 became leader at term 3"}
	{"level":"info","ts":"2023-08-30T22:00:03.146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa515506e66f6916 elected leader fa515506e66f6916 at term 3"}
	{"level":"info","ts":"2023-08-30T22:00:03.147Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"fa515506e66f6916","local-member-attributes":"{Name:test-preload-229573 ClientURLs:[https://192.168.39.128:2379]}","request-path":"/0/members/fa515506e66f6916/attributes","cluster-id":"b64da5b92548cbb8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T22:00:03.147Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:00:03.148Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:00:03.149Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.128:2379"}
	{"level":"info","ts":"2023-08-30T22:00:03.149Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T22:00:03.149Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-30T22:00:03.149Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:00:22 up 1 min,  0 users,  load average: 1.14, 0.33, 0.11
	Linux test-preload-229573 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [272d09a6f1909b072fd0d2cd6fab3007127bc936af9fde075313217e753c525d] <==
	* I0830 22:00:05.606327       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0830 22:00:05.570847       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0830 22:00:05.617919       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0830 22:00:05.617963       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0830 22:00:05.572301       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0830 22:00:05.618309       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0830 22:00:05.718311       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0830 22:00:05.718363       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0830 22:00:05.773724       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0830 22:00:05.776315       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0830 22:00:05.781953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0830 22:00:05.799580       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0830 22:00:05.799890       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0830 22:00:05.800090       1 cache.go:39] Caches are synced for autoregister controller
	I0830 22:00:05.832028       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0830 22:00:06.254048       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0830 22:00:06.593719       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0830 22:00:07.508735       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0830 22:00:07.520007       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0830 22:00:07.579882       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0830 22:00:07.611783       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0830 22:00:07.622700       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0830 22:00:08.211143       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0830 22:00:18.062409       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0830 22:00:18.097731       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [8a3d3c98f3dcb503226a1a2bffd9bef1e80d0e255d3df52fb3e562f81093a9a7] <==
	* I0830 22:00:18.052987       1 shared_informer.go:262] Caches are synced for PVC protection
	I0830 22:00:18.055000       1 shared_informer.go:262] Caches are synced for PV protection
	I0830 22:00:18.057514       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0830 22:00:18.076648       1 shared_informer.go:262] Caches are synced for namespace
	I0830 22:00:18.085000       1 shared_informer.go:262] Caches are synced for HPA
	I0830 22:00:18.086671       1 shared_informer.go:262] Caches are synced for stateful set
	I0830 22:00:18.087658       1 shared_informer.go:262] Caches are synced for endpoint
	I0830 22:00:18.089591       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0830 22:00:18.094247       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0830 22:00:18.094995       1 shared_informer.go:262] Caches are synced for persistent volume
	I0830 22:00:18.097997       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0830 22:00:18.101785       1 shared_informer.go:262] Caches are synced for crt configmap
	I0830 22:00:18.102042       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0830 22:00:18.102096       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0830 22:00:18.102124       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0830 22:00:18.104973       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0830 22:00:18.110812       1 shared_informer.go:262] Caches are synced for TTL
	I0830 22:00:18.119081       1 shared_informer.go:262] Caches are synced for ephemeral
	I0830 22:00:18.145388       1 shared_informer.go:262] Caches are synced for attach detach
	I0830 22:00:18.275482       1 shared_informer.go:262] Caches are synced for resource quota
	I0830 22:00:18.287141       1 shared_informer.go:262] Caches are synced for cronjob
	I0830 22:00:18.300635       1 shared_informer.go:262] Caches are synced for resource quota
	I0830 22:00:18.739507       1 shared_informer.go:262] Caches are synced for garbage collector
	I0830 22:00:18.777109       1 shared_informer.go:262] Caches are synced for garbage collector
	I0830 22:00:18.777190       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [16218f8a9fbeeaa2e03ff77c8fdfe188367f13f92d10be6d61cba395f682655c] <==
	* I0830 22:00:08.113498       1 node.go:163] Successfully retrieved node IP: 192.168.39.128
	I0830 22:00:08.113585       1 server_others.go:138] "Detected node IP" address="192.168.39.128"
	I0830 22:00:08.113623       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0830 22:00:08.179205       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0830 22:00:08.179248       1 server_others.go:206] "Using iptables Proxier"
	I0830 22:00:08.179277       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0830 22:00:08.180616       1 server.go:661] "Version info" version="v1.24.4"
	I0830 22:00:08.180718       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:00:08.186701       1 config.go:317] "Starting service config controller"
	I0830 22:00:08.187114       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0830 22:00:08.187629       1 config.go:226] "Starting endpoint slice config controller"
	I0830 22:00:08.187664       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0830 22:00:08.194906       1 config.go:444] "Starting node config controller"
	I0830 22:00:08.195031       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0830 22:00:08.288585       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0830 22:00:08.288681       1 shared_informer.go:262] Caches are synced for service config
	I0830 22:00:08.295658       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [97bc3c9cbec5c39551f4497cba83297f9164041fcab8bf597bcc62864e0b7ae3] <==
	* I0830 22:00:01.837728       1 serving.go:348] Generated self-signed cert in-memory
	W0830 22:00:05.644948       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0830 22:00:05.645579       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 22:00:05.645719       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0830 22:00:05.645852       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 22:00:05.689875       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0830 22:00:05.689920       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:00:05.695490       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0830 22:00:05.695681       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0830 22:00:05.695740       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 22:00:05.705535       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0830 22:00:05.796623       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 21:59:22 UTC, ends at Wed 2023-08-30 22:00:22 UTC. --
	Aug 30 22:00:05 test-preload-229573 kubelet[1096]: E0830 22:00:05.683596    1096 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Aug 30 22:00:05 test-preload-229573 kubelet[1096]: I0830 22:00:05.814483    1096 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-229573"
	Aug 30 22:00:05 test-preload-229573 kubelet[1096]: I0830 22:00:05.814918    1096 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-229573"
	Aug 30 22:00:05 test-preload-229573 kubelet[1096]: I0830 22:00:05.818296    1096 setters.go:532] "Node became not ready" node="test-preload-229573" condition={Type:Ready Status:False LastHeartbeatTime:2023-08-30 22:00:05.818249213 +0000 UTC m=+7.767494558 LastTransitionTime:2023-08-30 22:00:05.818249213 +0000 UTC m=+7.767494558 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.227729    1096 apiserver.go:52] "Watching apiserver"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.232157    1096 topology_manager.go:200] "Topology Admit Handler"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.232228    1096 topology_manager.go:200] "Topology Admit Handler"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.232260    1096 topology_manager.go:200] "Topology Admit Handler"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: E0830 22:00:06.233358    1096 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9qkv2" podUID=34ebcb0e-20c1-4273-b96d-23986a3ca37b
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.348954    1096 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=eac08540-1c6e-4fa7-8eb7-df71c6a4196a path="/var/lib/kubelet/pods/eac08540-1c6e-4fa7-8eb7-df71c6a4196a/volumes"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396593    1096 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34ebcb0e-20c1-4273-b96d-23986a3ca37b-config-volume\") pod \"coredns-6d4b75cb6d-9qkv2\" (UID: \"34ebcb0e-20c1-4273-b96d-23986a3ca37b\") " pod="kube-system/coredns-6d4b75cb6d-9qkv2"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396661    1096 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e9c7421-aa35-4b98-a722-2c2cbb2fff45-xtables-lock\") pod \"kube-proxy-ss8jg\" (UID: \"4e9c7421-aa35-4b98-a722-2c2cbb2fff45\") " pod="kube-system/kube-proxy-ss8jg"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396698    1096 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e9c7421-aa35-4b98-a722-2c2cbb2fff45-lib-modules\") pod \"kube-proxy-ss8jg\" (UID: \"4e9c7421-aa35-4b98-a722-2c2cbb2fff45\") " pod="kube-system/kube-proxy-ss8jg"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396722    1096 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e9c7421-aa35-4b98-a722-2c2cbb2fff45-kube-proxy\") pod \"kube-proxy-ss8jg\" (UID: \"4e9c7421-aa35-4b98-a722-2c2cbb2fff45\") " pod="kube-system/kube-proxy-ss8jg"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396750    1096 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9hdn\" (UniqueName: \"kubernetes.io/projected/34ebcb0e-20c1-4273-b96d-23986a3ca37b-kube-api-access-r9hdn\") pod \"coredns-6d4b75cb6d-9qkv2\" (UID: \"34ebcb0e-20c1-4273-b96d-23986a3ca37b\") " pod="kube-system/coredns-6d4b75cb6d-9qkv2"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396792    1096 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bfc5d77b-babb-4038-ad77-a226f68bf053-tmp\") pod \"storage-provisioner\" (UID: \"bfc5d77b-babb-4038-ad77-a226f68bf053\") " pod="kube-system/storage-provisioner"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396823    1096 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wwch\" (UniqueName: \"kubernetes.io/projected/bfc5d77b-babb-4038-ad77-a226f68bf053-kube-api-access-7wwch\") pod \"storage-provisioner\" (UID: \"bfc5d77b-babb-4038-ad77-a226f68bf053\") " pod="kube-system/storage-provisioner"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396841    1096 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp2xp\" (UniqueName: \"kubernetes.io/projected/4e9c7421-aa35-4b98-a722-2c2cbb2fff45-kube-api-access-dp2xp\") pod \"kube-proxy-ss8jg\" (UID: \"4e9c7421-aa35-4b98-a722-2c2cbb2fff45\") " pod="kube-system/kube-proxy-ss8jg"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: I0830 22:00:06.396851    1096 reconciler.go:159] "Reconciler: start to sync state"
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: E0830 22:00:06.502222    1096 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 30 22:00:06 test-preload-229573 kubelet[1096]: E0830 22:00:06.502323    1096 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/34ebcb0e-20c1-4273-b96d-23986a3ca37b-config-volume podName:34ebcb0e-20c1-4273-b96d-23986a3ca37b nodeName:}" failed. No retries permitted until 2023-08-30 22:00:07.002305479 +0000 UTC m=+8.951550826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34ebcb0e-20c1-4273-b96d-23986a3ca37b-config-volume") pod "coredns-6d4b75cb6d-9qkv2" (UID: "34ebcb0e-20c1-4273-b96d-23986a3ca37b") : object "kube-system"/"coredns" not registered
	Aug 30 22:00:07 test-preload-229573 kubelet[1096]: E0830 22:00:07.003726    1096 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 30 22:00:07 test-preload-229573 kubelet[1096]: E0830 22:00:07.003817    1096 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/34ebcb0e-20c1-4273-b96d-23986a3ca37b-config-volume podName:34ebcb0e-20c1-4273-b96d-23986a3ca37b nodeName:}" failed. No retries permitted until 2023-08-30 22:00:08.00380223 +0000 UTC m=+9.953047577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34ebcb0e-20c1-4273-b96d-23986a3ca37b-config-volume") pod "coredns-6d4b75cb6d-9qkv2" (UID: "34ebcb0e-20c1-4273-b96d-23986a3ca37b") : object "kube-system"/"coredns" not registered
	Aug 30 22:00:08 test-preload-229573 kubelet[1096]: E0830 22:00:08.011727    1096 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 30 22:00:08 test-preload-229573 kubelet[1096]: E0830 22:00:08.011794    1096 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/34ebcb0e-20c1-4273-b96d-23986a3ca37b-config-volume podName:34ebcb0e-20c1-4273-b96d-23986a3ca37b nodeName:}" failed. No retries permitted until 2023-08-30 22:00:10.011780464 +0000 UTC m=+11.961025810 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34ebcb0e-20c1-4273-b96d-23986a3ca37b-config-volume") pod "coredns-6d4b75cb6d-9qkv2" (UID: "34ebcb0e-20c1-4273-b96d-23986a3ca37b") : object "kube-system"/"coredns" not registered
	
	* 
	* ==> storage-provisioner [8a9324d341344de77c4137f8e21af9c44c4c24536ebb71b25d265eb0d72f101a] <==
	* I0830 22:00:08.308146       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-229573 -n test-preload-229573
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-229573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-229573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-229573
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-229573: (1.121761523s)
--- FAIL: TestPreload (178.56s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (164.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.6.2.125882958.exe start -p running-upgrade-132310 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0830 22:02:25.784850  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.6.2.125882958.exe start -p running-upgrade-132310 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m10.323763117s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-132310 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-132310 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (32.159642555s)

                                                
                                                
-- stdout --
	* [running-upgrade-132310] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-132310 in cluster running-upgrade-132310
	* Updating the running kvm2 "running-upgrade-132310" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:04:32.685218  986851 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:04:32.685387  986851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:04:32.685413  986851 out.go:309] Setting ErrFile to fd 2...
	I0830 22:04:32.685425  986851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:04:32.685653  986851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:04:32.686240  986851 out.go:303] Setting JSON to false
	I0830 22:04:32.687302  986851 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13620,"bootTime":1693419453,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:04:32.687373  986851 start.go:138] virtualization: kvm guest
	I0830 22:04:32.690017  986851 out.go:177] * [running-upgrade-132310] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:04:32.692210  986851 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:04:32.692138  986851 notify.go:220] Checking for updates...
	I0830 22:04:32.693806  986851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:04:32.695382  986851 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:04:32.697075  986851 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:04:32.698617  986851 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:04:32.700770  986851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:04:32.702978  986851 config.go:182] Loaded profile config "running-upgrade-132310": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0830 22:04:32.703007  986851 start_flags.go:683] config upgrade: Driver=kvm2
	I0830 22:04:32.703023  986851 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec
	I0830 22:04:32.703169  986851 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/running-upgrade-132310/config.json ...
	I0830 22:04:32.704030  986851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:04:32.704117  986851 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:04:32.725327  986851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I0830 22:04:32.725901  986851 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:04:32.726467  986851 main.go:141] libmachine: Using API Version  1
	I0830 22:04:32.726489  986851 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:04:32.726961  986851 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:04:32.727143  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:04:32.729581  986851 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:04:32.732612  986851 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:04:32.732943  986851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:04:32.732993  986851 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:04:32.758605  986851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40823
	I0830 22:04:32.759395  986851 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:04:32.760417  986851 main.go:141] libmachine: Using API Version  1
	I0830 22:04:32.760442  986851 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:04:32.760977  986851 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:04:32.761200  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:04:32.816446  986851 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:04:32.818192  986851 start.go:298] selected driver: kvm2
	I0830 22:04:32.818222  986851 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-132310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.239 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:04:32.818368  986851 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:04:32.819484  986851 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.819588  986851 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:04:32.841150  986851 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:04:32.841621  986851 cni.go:84] Creating CNI manager for ""
	I0830 22:04:32.841643  986851 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0830 22:04:32.841658  986851 start_flags.go:319] config:
	{Name:running-upgrade-132310 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.239 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:04:32.841925  986851 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.843628  986851 out.go:177] * Starting control plane node running-upgrade-132310 in cluster running-upgrade-132310
	I0830 22:04:32.844890  986851 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0830 22:04:32.869097  986851 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0830 22:04:32.869277  986851 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/running-upgrade-132310/config.json ...
	I0830 22:04:32.869587  986851 start.go:365] acquiring machines lock for running-upgrade-132310: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:04:32.869841  986851 cache.go:107] acquiring lock: {Name:mk8758de4aa8ff5b09eddf5bc54aa3ef01f9619f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.869851  986851 cache.go:107] acquiring lock: {Name:mk5fb9174ffe5538125b9391acd56cb4ff21190a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.869890  986851 cache.go:107] acquiring lock: {Name:mk53d9bea7eb288d239b90fec14991b1efceb816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.869932  986851 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0830 22:04:32.869945  986851 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.863µs
	I0830 22:04:32.869956  986851 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0830 22:04:32.869954  986851 cache.go:107] acquiring lock: {Name:mkd4f2e347e5b06ec55ec1a362b8a6c991ffb953 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.869977  986851 cache.go:107] acquiring lock: {Name:mk3ccaf475c79baee8d592c56b122abce59bccdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.869997  986851 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0830 22:04:32.870025  986851 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0830 22:04:32.870058  986851 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0830 22:04:32.870065  986851 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0830 22:04:32.870208  986851 cache.go:107] acquiring lock: {Name:mk4bb0267e4a0b3947f63fcd62c5844494da2100 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.870235  986851 cache.go:107] acquiring lock: {Name:mke23a3138081d4605dbb1360cc0a5d22aa60c7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.870214  986851 cache.go:107] acquiring lock: {Name:mk94057b3441d3b8971fc1414c1cfe3b4efe11a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:32.870290  986851 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0830 22:04:32.870317  986851 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0830 22:04:32.870334  986851 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0830 22:04:32.871354  986851 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0830 22:04:32.871682  986851 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0830 22:04:32.871698  986851 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0830 22:04:32.871933  986851 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0830 22:04:32.871965  986851 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0830 22:04:32.871969  986851 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0830 22:04:32.871355  986851 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0830 22:04:33.040790  986851 cache.go:162] opening:  /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0830 22:04:33.048968  986851 cache.go:162] opening:  /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0830 22:04:33.049834  986851 cache.go:162] opening:  /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0830 22:04:33.060496  986851 cache.go:162] opening:  /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0830 22:04:33.063639  986851 cache.go:162] opening:  /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0830 22:04:33.069616  986851 cache.go:162] opening:  /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0830 22:04:33.105818  986851 cache.go:162] opening:  /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0830 22:04:33.138553  986851 cache.go:157] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0830 22:04:33.138581  986851 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 268.659397ms
	I0830 22:04:33.138596  986851 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0830 22:04:33.574505  986851 cache.go:157] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0830 22:04:33.574537  986851 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 704.340029ms
	I0830 22:04:33.574551  986851 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0830 22:04:33.848014  986851 cache.go:157] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0830 22:04:33.848057  986851 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 978.217675ms
	I0830 22:04:33.848076  986851 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0830 22:04:34.196321  986851 cache.go:157] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0830 22:04:34.196353  986851 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.326481752s
	I0830 22:04:34.196370  986851 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0830 22:04:34.226871  986851 cache.go:157] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0830 22:04:34.226923  986851 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.356691076s
	I0830 22:04:34.226941  986851 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0830 22:04:34.577859  986851 cache.go:157] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0830 22:04:34.577892  986851 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.707921753s
	I0830 22:04:34.577908  986851 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0830 22:04:34.632685  986851 cache.go:157] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0830 22:04:34.632714  986851 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.762505407s
	I0830 22:04:34.632726  986851 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0830 22:04:34.632743  986851 cache.go:87] Successfully saved all images to host disk.
	I0830 22:05:01.018940  986851 start.go:369] acquired machines lock for "running-upgrade-132310" in 28.149296864s
	I0830 22:05:01.019039  986851 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:05:01.019065  986851 fix.go:54] fixHost starting: minikube
	I0830 22:05:01.019522  986851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:05:01.019557  986851 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:05:01.038177  986851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41497
	I0830 22:05:01.038727  986851 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:05:01.039203  986851 main.go:141] libmachine: Using API Version  1
	I0830 22:05:01.039227  986851 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:05:01.043901  986851 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:05:01.044308  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:05:01.044491  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetState
	I0830 22:05:01.047102  986851 fix.go:102] recreateIfNeeded on running-upgrade-132310: state=Running err=<nil>
	W0830 22:05:01.047127  986851 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:05:01.049152  986851 out.go:177] * Updating the running kvm2 "running-upgrade-132310" VM ...
	I0830 22:05:01.050816  986851 machine.go:88] provisioning docker machine ...
	I0830 22:05:01.050844  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:05:01.052940  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetMachineName
	I0830 22:05:01.053176  986851 buildroot.go:166] provisioning hostname "running-upgrade-132310"
	I0830 22:05:01.053207  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetMachineName
	I0830 22:05:01.053341  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:01.057109  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.057687  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:01.057722  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.057917  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHPort
	I0830 22:05:01.059013  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:01.059212  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:01.059330  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHUsername
	I0830 22:05:01.059525  986851 main.go:141] libmachine: Using SSH client type: native
	I0830 22:05:01.060258  986851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I0830 22:05:01.060275  986851 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-132310 && echo "running-upgrade-132310" | sudo tee /etc/hostname
	I0830 22:05:01.307136  986851 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-132310
	
	I0830 22:05:01.307179  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:01.310316  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.310748  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:01.310774  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.311167  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHPort
	I0830 22:05:01.311392  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:01.311566  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:01.311745  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHUsername
	I0830 22:05:01.311952  986851 main.go:141] libmachine: Using SSH client type: native
	I0830 22:05:01.312618  986851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I0830 22:05:01.312648  986851 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-132310' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-132310/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-132310' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:05:01.475144  986851 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:05:01.475183  986851 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:05:01.475227  986851 buildroot.go:174] setting up certificates
	I0830 22:05:01.475239  986851 provision.go:83] configureAuth start
	I0830 22:05:01.475254  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetMachineName
	I0830 22:05:01.475598  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetIP
	I0830 22:05:01.479219  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.479656  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:01.479683  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.479989  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:01.482990  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.483562  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:01.483673  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.484154  986851 provision.go:138] copyHostCerts
	I0830 22:05:01.484217  986851 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:05:01.484228  986851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:05:01.484296  986851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:05:01.484415  986851 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:05:01.484423  986851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:05:01.484453  986851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:05:01.484542  986851 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:05:01.484548  986851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:05:01.484577  986851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:05:01.484638  986851 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-132310 san=[192.168.50.239 192.168.50.239 localhost 127.0.0.1 minikube running-upgrade-132310]
	I0830 22:05:01.563371  986851 provision.go:172] copyRemoteCerts
	I0830 22:05:01.563500  986851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:05:01.563560  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:01.566894  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.567486  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:01.567536  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.567823  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHPort
	I0830 22:05:01.568047  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:01.568234  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHUsername
	I0830 22:05:01.568442  986851 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/running-upgrade-132310/id_rsa Username:docker}
	I0830 22:05:01.685556  986851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:05:01.726861  986851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:05:01.749402  986851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:05:01.769772  986851 provision.go:86] duration metric: configureAuth took 294.518611ms
	I0830 22:05:01.769816  986851 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:05:01.770014  986851 config.go:182] Loaded profile config "running-upgrade-132310": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0830 22:05:01.770122  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:01.773192  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.773616  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:01.773645  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:01.773959  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHPort
	I0830 22:05:01.774190  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:01.774358  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:01.774506  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHUsername
	I0830 22:05:01.774666  986851 main.go:141] libmachine: Using SSH client type: native
	I0830 22:05:01.775067  986851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I0830 22:05:01.775089  986851 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:05:02.652012  986851 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:05:02.652044  986851 machine.go:91] provisioned docker machine in 1.601211965s
	I0830 22:05:02.652057  986851 start.go:300] post-start starting for "running-upgrade-132310" (driver="kvm2")
	I0830 22:05:02.652071  986851 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:05:02.652112  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:05:02.652514  986851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:05:02.652557  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:02.655241  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.655747  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:02.655800  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.655928  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHPort
	I0830 22:05:02.656140  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:02.656308  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHUsername
	I0830 22:05:02.656452  986851 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/running-upgrade-132310/id_rsa Username:docker}
	I0830 22:05:02.748035  986851 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:05:02.781780  986851 info.go:137] Remote host: Buildroot 2019.02.7
	I0830 22:05:02.781811  986851 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:05:02.781896  986851 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:05:02.782022  986851 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:05:02.782164  986851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:05:02.789421  986851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:05:02.807807  986851 start.go:303] post-start completed in 155.731161ms
	I0830 22:05:02.807837  986851 fix.go:56] fixHost completed within 1.788772452s
	I0830 22:05:02.807862  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:02.810181  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.810452  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:02.810489  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.810622  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHPort
	I0830 22:05:02.810819  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:02.810982  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:02.811090  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHUsername
	I0830 22:05:02.811250  986851 main.go:141] libmachine: Using SSH client type: native
	I0830 22:05:02.811683  986851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I0830 22:05:02.811695  986851 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 22:05:02.941396  986851 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433102.937386948
	
	I0830 22:05:02.941424  986851 fix.go:206] guest clock: 1693433102.937386948
	I0830 22:05:02.941437  986851 fix.go:219] Guest: 2023-08-30 22:05:02.937386948 +0000 UTC Remote: 2023-08-30 22:05:02.807841708 +0000 UTC m=+30.188751449 (delta=129.54524ms)
	I0830 22:05:02.941464  986851 fix.go:190] guest clock delta is within tolerance: 129.54524ms
	I0830 22:05:02.941471  986851 start.go:83] releasing machines lock for "running-upgrade-132310", held for 1.922496311s
	I0830 22:05:02.941502  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:05:02.941794  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetIP
	I0830 22:05:02.944694  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.945086  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:02.945115  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.945316  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:05:02.945916  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:05:02.946117  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .DriverName
	I0830 22:05:02.946201  986851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:05:02.946263  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:02.946490  986851 ssh_runner.go:195] Run: cat /version.json
	I0830 22:05:02.946534  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHHostname
	I0830 22:05:02.949280  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.949402  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.949679  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:02.949720  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.949748  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:fe:90", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:02:51 +0000 UTC Type:0 Mac:52:54:00:2f:fe:90 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-132310 Clientid:01:52:54:00:2f:fe:90}
	I0830 22:05:02.949767  986851 main.go:141] libmachine: (running-upgrade-132310) DBG | domain running-upgrade-132310 has defined IP address 192.168.50.239 and MAC address 52:54:00:2f:fe:90 in network minikube-net
	I0830 22:05:02.949922  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHPort
	I0830 22:05:02.950060  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHPort
	I0830 22:05:02.950114  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:02.950280  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHUsername
	I0830 22:05:02.950287  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHKeyPath
	I0830 22:05:02.950426  986851 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/running-upgrade-132310/id_rsa Username:docker}
	I0830 22:05:02.951121  986851 main.go:141] libmachine: (running-upgrade-132310) Calling .GetSSHUsername
	I0830 22:05:02.951330  986851 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/running-upgrade-132310/id_rsa Username:docker}
	W0830 22:05:03.076384  986851 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0830 22:05:03.076472  986851 ssh_runner.go:195] Run: systemctl --version
	I0830 22:05:03.082059  986851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:05:03.149868  986851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:05:03.156046  986851 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:05:03.156125  986851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:05:03.161967  986851 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0830 22:05:03.161994  986851 start.go:466] detecting cgroup driver to use...
	I0830 22:05:03.162050  986851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:05:03.174638  986851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:05:03.185081  986851 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:05:03.185135  986851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:05:03.197558  986851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:05:03.208175  986851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0830 22:05:03.219406  986851 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0830 22:05:03.219483  986851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:05:03.368740  986851 docker.go:212] disabling docker service ...
	I0830 22:05:03.368825  986851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:05:04.394747  986851 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.025890418s)
	I0830 22:05:04.394840  986851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:05:04.406333  986851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:05:04.543245  986851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:05:04.724647  986851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:05:04.733922  986851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:05:04.748284  986851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0830 22:05:04.748363  986851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:05:04.759504  986851 out.go:177] 
	W0830 22:05:04.761125  986851 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0830 22:05:04.761150  986851 out.go:239] * 
	* 
	W0830 22:05:04.762248  986851 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:05:04.763617  986851 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-132310 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-30 22:05:04.786832028 +0000 UTC m=+3346.979581992
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-132310 -n running-upgrade-132310
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-132310 -n running-upgrade-132310: exit status 4 (304.718458ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:05:05.046433  987388 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-132310" does not appear in /home/jenkins/minikube-integration/17114-955377/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-132310" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-132310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-132310
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-132310: (1.54981243s)
--- FAIL: TestRunningBinaryUpgrade (164.74s)
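
Note on the failure above: the new binary exits with RUNTIME_ENABLE because it rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, and that drop-in file does not exist on the guest image created by minikube v1.6.2 (sed reports "No such file or directory"). The sketch below only illustrates the kind of fallback that would tolerate both config layouts; the helper name, the candidate paths, and running sed locally instead of over SSH are assumptions for the example, not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// pickCrioConf returns the first CRI-O config path that exists on the node,
// preferring the drop-in used by newer guest images and falling back to the
// monolithic file that older (v1.6.x era) images presumably ship.
func pickCrioConf(candidates []string) (string, error) {
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("no CRI-O config found in %v", candidates)
}

func main() {
	conf, err := pickCrioConf([]string{
		"/etc/crio/crio.conf.d/02-crio.conf", // newer guest images
		"/etc/crio/crio.conf",                // assumed fallback on older images
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Rewrite pause_image in place, mirroring the sed command shown in the log.
	expr := `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|`
	cmd := exec.Command("sudo", "sed", "-i", expr, conf)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "update pause_image:", err)
		os.Exit(1)
	}
}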

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (293.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.6.2.607457117.exe start -p stopped-upgrade-184733 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.6.2.607457117.exe start -p stopped-upgrade-184733 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.255464132s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.6.2.607457117.exe -p stopped-upgrade-184733 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.6.2.607457117.exe -p stopped-upgrade-184733 stop: (1m32.769297507s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-184733 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-184733 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m12.415356999s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-184733] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-184733 in cluster stopped-upgrade-184733
	* Restarting existing kvm2 VM for "stopped-upgrade-184733" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:07:52.844387  991847 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:07:52.844544  991847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:07:52.844557  991847 out.go:309] Setting ErrFile to fd 2...
	I0830 22:07:52.844563  991847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:07:52.844765  991847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:07:52.845300  991847 out.go:303] Setting JSON to false
	I0830 22:07:52.846288  991847 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13820,"bootTime":1693419453,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:07:52.846348  991847 start.go:138] virtualization: kvm guest
	I0830 22:07:52.849088  991847 out.go:177] * [stopped-upgrade-184733] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:07:52.850676  991847 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:07:52.850760  991847 notify.go:220] Checking for updates...
	I0830 22:07:52.852197  991847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:07:52.854080  991847 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:07:52.855626  991847 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:07:52.856991  991847 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:07:52.858526  991847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:07:52.860432  991847 config.go:182] Loaded profile config "stopped-upgrade-184733": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0830 22:07:52.860458  991847 start_flags.go:683] config upgrade: Driver=kvm2
	I0830 22:07:52.860469  991847 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec
	I0830 22:07:52.860552  991847 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/stopped-upgrade-184733/config.json ...
	I0830 22:07:52.861275  991847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:52.861394  991847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:52.876730  991847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0830 22:07:52.877140  991847 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:52.877695  991847 main.go:141] libmachine: Using API Version  1
	I0830 22:07:52.877715  991847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:52.878027  991847 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:52.878205  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:07:52.880194  991847 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:07:52.881566  991847 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:07:52.881856  991847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:52.881895  991847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:52.896233  991847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
	I0830 22:07:52.896681  991847 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:52.897150  991847 main.go:141] libmachine: Using API Version  1
	I0830 22:07:52.897171  991847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:52.897501  991847 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:52.897698  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:07:52.933428  991847 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:07:52.934972  991847 start.go:298] selected driver: kvm2
	I0830 22:07:52.935002  991847 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-184733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.72 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:07:52.935205  991847 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:07:52.936046  991847 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.936118  991847 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:07:52.951826  991847 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:07:52.952204  991847 cni.go:84] Creating CNI manager for ""
	I0830 22:07:52.952219  991847 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0830 22:07:52.952228  991847 start_flags.go:319] config:
	{Name:stopped-upgrade-184733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.72 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:07:52.952449  991847 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.954604  991847 out.go:177] * Starting control plane node stopped-upgrade-184733 in cluster stopped-upgrade-184733
	I0830 22:07:52.956049  991847 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0830 22:07:52.981753  991847 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0830 22:07:52.981874  991847 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/stopped-upgrade-184733/config.json ...
	I0830 22:07:52.982025  991847 cache.go:107] acquiring lock: {Name:mk5fb9174ffe5538125b9391acd56cb4ff21190a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.982056  991847 cache.go:107] acquiring lock: {Name:mk3ccaf475c79baee8d592c56b122abce59bccdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.982055  991847 cache.go:107] acquiring lock: {Name:mkd4f2e347e5b06ec55ec1a362b8a6c991ffb953 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.982099  991847 start.go:365] acquiring machines lock for stopped-upgrade-184733: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:07:52.982110  991847 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0830 22:07:52.982122  991847 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 113.533µs
	I0830 22:07:52.982132  991847 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0830 22:07:52.982134  991847 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0830 22:07:52.982108  991847 cache.go:107] acquiring lock: {Name:mk94057b3441d3b8971fc1414c1cfe3b4efe11a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.982146  991847 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0830 22:07:52.982144  991847 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 89.807µs
	I0830 22:07:52.982019  991847 cache.go:107] acquiring lock: {Name:mk8758de4aa8ff5b09eddf5bc54aa3ef01f9619f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.982155  991847 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0830 22:07:52.982145  991847 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 99.112µs
	I0830 22:07:52.982202  991847 cache.go:107] acquiring lock: {Name:mk53d9bea7eb288d239b90fec14991b1efceb816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.982241  991847 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0830 22:07:52.982156  991847 cache.go:107] acquiring lock: {Name:mk4bb0267e4a0b3947f63fcd62c5844494da2100 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.982233  991847 cache.go:107] acquiring lock: {Name:mke23a3138081d4605dbb1360cc0a5d22aa60c7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:07:52.982236  991847 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0830 22:07:52.982332  991847 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0830 22:07:52.982340  991847 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0830 22:07:52.982334  991847 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0830 22:07:52.982344  991847 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 270.543µs
	I0830 22:07:52.982349  991847 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 208.196µs
	I0830 22:07:52.982345  991847 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 225.035µs
	I0830 22:07:52.982358  991847 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0830 22:07:52.982352  991847 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 169.781µs
	I0830 22:07:52.982363  991847 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0830 22:07:52.982367  991847 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0830 22:07:52.982245  991847 cache.go:115] /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0830 22:07:52.982385  991847 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 377.198µs
	I0830 22:07:52.982398  991847 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0830 22:07:52.982360  991847 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0830 22:07:52.982407  991847 cache.go:87] Successfully saved all images to host disk.
	I0830 22:08:23.540628  991847 start.go:369] acquired machines lock for "stopped-upgrade-184733" in 30.558489158s
	I0830 22:08:23.540681  991847 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:08:23.540693  991847 fix.go:54] fixHost starting: minikube
	I0830 22:08:23.541136  991847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:08:23.541183  991847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:08:23.558131  991847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0830 22:08:23.558598  991847 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:08:23.559132  991847 main.go:141] libmachine: Using API Version  1
	I0830 22:08:23.559155  991847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:08:23.559531  991847 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:08:23.559736  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:08:23.559910  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetState
	I0830 22:08:23.561545  991847 fix.go:102] recreateIfNeeded on stopped-upgrade-184733: state=Stopped err=<nil>
	I0830 22:08:23.561584  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	W0830 22:08:23.561758  991847 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:08:23.563993  991847 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-184733" ...
	I0830 22:08:23.565503  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .Start
	I0830 22:08:23.565669  991847 main.go:141] libmachine: (stopped-upgrade-184733) Ensuring networks are active...
	I0830 22:08:23.566387  991847 main.go:141] libmachine: (stopped-upgrade-184733) Ensuring network default is active
	I0830 22:08:23.566717  991847 main.go:141] libmachine: (stopped-upgrade-184733) Ensuring network minikube-net is active
	I0830 22:08:23.567170  991847 main.go:141] libmachine: (stopped-upgrade-184733) Getting domain xml...
	I0830 22:08:23.567980  991847 main.go:141] libmachine: (stopped-upgrade-184733) Creating domain...
	I0830 22:08:25.086356  991847 main.go:141] libmachine: (stopped-upgrade-184733) Waiting to get IP...
	I0830 22:08:25.087453  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:25.088008  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:25.088109  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:25.088000  992346 retry.go:31] will retry after 222.388302ms: waiting for machine to come up
	I0830 22:08:25.312728  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:25.313213  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:25.313242  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:25.313174  992346 retry.go:31] will retry after 357.494463ms: waiting for machine to come up
	I0830 22:08:25.673101  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:25.673669  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:25.673698  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:25.673617  992346 retry.go:31] will retry after 434.531197ms: waiting for machine to come up
	I0830 22:08:26.110538  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:26.111177  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:26.111204  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:26.111081  992346 retry.go:31] will retry after 399.192715ms: waiting for machine to come up
	I0830 22:08:26.512670  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:26.513182  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:26.513207  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:26.513138  992346 retry.go:31] will retry after 541.99991ms: waiting for machine to come up
	I0830 22:08:27.057055  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:27.057602  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:27.057637  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:27.057531  992346 retry.go:31] will retry after 946.019191ms: waiting for machine to come up
	I0830 22:08:28.005724  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:28.006442  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:28.006477  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:28.006380  992346 retry.go:31] will retry after 930.480344ms: waiting for machine to come up
	I0830 22:08:28.939158  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:28.939707  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:28.939741  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:28.939637  992346 retry.go:31] will retry after 1.100407749s: waiting for machine to come up
	I0830 22:08:30.041841  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:30.042372  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:30.042424  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:30.042326  992346 retry.go:31] will retry after 1.186044152s: waiting for machine to come up
	I0830 22:08:31.230579  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:31.231070  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:31.231096  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:31.231006  992346 retry.go:31] will retry after 1.764742674s: waiting for machine to come up
	I0830 22:08:32.997209  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:32.997696  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:32.997725  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:32.997627  992346 retry.go:31] will retry after 2.606370247s: waiting for machine to come up
	I0830 22:08:35.606206  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:35.606724  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:35.606754  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:35.606680  992346 retry.go:31] will retry after 2.520645019s: waiting for machine to come up
	I0830 22:08:38.129126  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:38.129722  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:38.129756  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:38.129662  992346 retry.go:31] will retry after 4.362833634s: waiting for machine to come up
	I0830 22:08:42.497099  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:42.497649  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:42.497681  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:42.497595  992346 retry.go:31] will retry after 4.212140874s: waiting for machine to come up
	I0830 22:08:46.713313  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:46.713783  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:46.713827  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:46.713730  992346 retry.go:31] will retry after 6.70128163s: waiting for machine to come up
	I0830 22:08:53.420365  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:08:53.420825  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | unable to find current IP address of domain stopped-upgrade-184733 in network minikube-net
	I0830 22:08:53.420855  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | I0830 22:08:53.420777  992346 retry.go:31] will retry after 8.78445381s: waiting for machine to come up
	I0830 22:09:02.209120  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.209592  991847 main.go:141] libmachine: (stopped-upgrade-184733) Found IP for machine: 192.168.50.72
	I0830 22:09:02.209614  991847 main.go:141] libmachine: (stopped-upgrade-184733) Reserving static IP address...
	I0830 22:09:02.209640  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has current primary IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.210106  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "stopped-upgrade-184733", mac: "52:54:00:dc:03:ed", ip: "192.168.50.72"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:02.210132  991847 main.go:141] libmachine: (stopped-upgrade-184733) Reserved static IP address: 192.168.50.72
	I0830 22:09:02.210145  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-184733", mac: "52:54:00:dc:03:ed", ip: "192.168.50.72"}
	I0830 22:09:02.210156  991847 main.go:141] libmachine: (stopped-upgrade-184733) Waiting for SSH to be available...
	I0830 22:09:02.210174  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | Getting to WaitForSSH function...
	I0830 22:09:02.212175  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.212561  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:02.212600  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.212718  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | Using SSH client type: external
	I0830 22:09:02.212741  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/stopped-upgrade-184733/id_rsa (-rw-------)
	I0830 22:09:02.212771  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/stopped-upgrade-184733/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:09:02.212788  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | About to run SSH command:
	I0830 22:09:02.212799  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | exit 0
	I0830 22:09:02.347460  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | SSH cmd err, output: <nil>: 
	I0830 22:09:02.347851  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetConfigRaw
	I0830 22:09:02.348616  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetIP
	I0830 22:09:02.351230  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.351632  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:02.351686  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.351997  991847 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/stopped-upgrade-184733/config.json ...
	I0830 22:09:02.352208  991847 machine.go:88] provisioning docker machine ...
	I0830 22:09:02.352230  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:09:02.352453  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetMachineName
	I0830 22:09:02.352643  991847 buildroot.go:166] provisioning hostname "stopped-upgrade-184733"
	I0830 22:09:02.352660  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetMachineName
	I0830 22:09:02.352831  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:02.355013  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.355367  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:02.355395  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.355517  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHPort
	I0830 22:09:02.355714  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:02.355869  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:02.356062  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHUsername
	I0830 22:09:02.356263  991847 main.go:141] libmachine: Using SSH client type: native
	I0830 22:09:02.356743  991847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0830 22:09:02.356761  991847 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-184733 && echo "stopped-upgrade-184733" | sudo tee /etc/hostname
	I0830 22:09:02.482661  991847 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-184733
	
	I0830 22:09:02.482702  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:02.485838  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.486266  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:02.486318  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.486465  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHPort
	I0830 22:09:02.486693  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:02.486837  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:02.486973  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHUsername
	I0830 22:09:02.487144  991847 main.go:141] libmachine: Using SSH client type: native
	I0830 22:09:02.487610  991847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0830 22:09:02.487629  991847 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-184733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-184733/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-184733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:09:02.612453  991847 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:09:02.612487  991847 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:09:02.612532  991847 buildroot.go:174] setting up certificates
	I0830 22:09:02.612545  991847 provision.go:83] configureAuth start
	I0830 22:09:02.612565  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetMachineName
	I0830 22:09:02.612888  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetIP
	I0830 22:09:02.615262  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.615660  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:02.615686  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.615874  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:02.618118  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.618433  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:02.618466  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.618553  991847 provision.go:138] copyHostCerts
	I0830 22:09:02.618619  991847 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:09:02.618644  991847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:09:02.618725  991847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:09:02.618868  991847 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:09:02.618878  991847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:09:02.618918  991847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:09:02.619066  991847 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:09:02.619078  991847 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:09:02.619120  991847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:09:02.619208  991847 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-184733 san=[192.168.50.72 192.168.50.72 localhost 127.0.0.1 minikube stopped-upgrade-184733]
	I0830 22:09:02.964371  991847 provision.go:172] copyRemoteCerts
	I0830 22:09:02.964439  991847 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:09:02.964473  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:02.967389  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.967759  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:02.967813  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:02.967979  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHPort
	I0830 22:09:02.968209  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:02.968416  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHUsername
	I0830 22:09:02.968577  991847 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/stopped-upgrade-184733/id_rsa Username:docker}
	I0830 22:09:03.058659  991847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:09:03.072863  991847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:09:03.086230  991847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:09:03.101295  991847 provision.go:86] duration metric: configureAuth took 488.718928ms
	I0830 22:09:03.101333  991847 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:09:03.101585  991847 config.go:182] Loaded profile config "stopped-upgrade-184733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0830 22:09:03.101689  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:03.104955  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:03.105378  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:03.105442  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:03.105642  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHPort
	I0830 22:09:03.105833  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:03.106033  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:03.106189  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHUsername
	I0830 22:09:03.106397  991847 main.go:141] libmachine: Using SSH client type: native
	I0830 22:09:03.106996  991847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0830 22:09:03.107020  991847 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:09:04.093932  991847 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:09:04.093964  991847 machine.go:91] provisioned docker machine in 1.741740816s
	I0830 22:09:04.093978  991847 start.go:300] post-start starting for "stopped-upgrade-184733" (driver="kvm2")
	I0830 22:09:04.093992  991847 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:09:04.094035  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:09:04.094447  991847 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:09:04.094501  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:04.097582  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.098094  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:04.098128  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.098299  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHPort
	I0830 22:09:04.098537  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:04.098760  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHUsername
	I0830 22:09:04.098922  991847 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/stopped-upgrade-184733/id_rsa Username:docker}
	I0830 22:09:04.190853  991847 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:09:04.196119  991847 info.go:137] Remote host: Buildroot 2019.02.7
	I0830 22:09:04.196164  991847 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:09:04.196246  991847 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:09:04.196347  991847 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:09:04.196480  991847 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:09:04.203163  991847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:09:04.220569  991847 start.go:303] post-start completed in 126.574559ms
	I0830 22:09:04.220598  991847 fix.go:56] fixHost completed within 40.679906659s
	I0830 22:09:04.220630  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:04.223372  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.223821  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:04.223860  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.223970  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHPort
	I0830 22:09:04.224223  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:04.224403  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:04.224549  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHUsername
	I0830 22:09:04.224725  991847 main.go:141] libmachine: Using SSH client type: native
	I0830 22:09:04.225162  991847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.72 22 <nil> <nil>}
	I0830 22:09:04.225176  991847 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 22:09:04.353048  991847 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433344.291864683
	
	I0830 22:09:04.353078  991847 fix.go:206] guest clock: 1693433344.291864683
	I0830 22:09:04.353089  991847 fix.go:219] Guest: 2023-08-30 22:09:04.291864683 +0000 UTC Remote: 2023-08-30 22:09:04.220603193 +0000 UTC m=+71.426071103 (delta=71.26149ms)
	I0830 22:09:04.353125  991847 fix.go:190] guest clock delta is within tolerance: 71.26149ms
	I0830 22:09:04.353131  991847 start.go:83] releasing machines lock for "stopped-upgrade-184733", held for 40.812473583s
	I0830 22:09:04.353173  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:09:04.353518  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetIP
	I0830 22:09:04.357114  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.357606  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:04.357644  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.357951  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:09:04.358551  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:09:04.358740  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .DriverName
	I0830 22:09:04.358842  991847 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:09:04.358894  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:04.358987  991847 ssh_runner.go:195] Run: cat /version.json
	I0830 22:09:04.359010  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHHostname
	I0830 22:09:04.361824  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.362044  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.362223  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:04.362258  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.362478  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:03:ed", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-08-30 23:08:50 +0000 UTC Type:0 Mac:52:54:00:dc:03:ed Iaid: IPaddr:192.168.50.72 Prefix:24 Hostname:stopped-upgrade-184733 Clientid:01:52:54:00:dc:03:ed}
	I0830 22:09:04.362514  991847 main.go:141] libmachine: (stopped-upgrade-184733) DBG | domain stopped-upgrade-184733 has defined IP address 192.168.50.72 and MAC address 52:54:00:dc:03:ed in network minikube-net
	I0830 22:09:04.362551  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHPort
	I0830 22:09:04.362665  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHPort
	I0830 22:09:04.362794  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:04.362875  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHKeyPath
	I0830 22:09:04.362959  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHUsername
	I0830 22:09:04.363039  991847 main.go:141] libmachine: (stopped-upgrade-184733) Calling .GetSSHUsername
	I0830 22:09:04.363118  991847 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/stopped-upgrade-184733/id_rsa Username:docker}
	I0830 22:09:04.363150  991847 sshutil.go:53] new ssh client: &{IP:192.168.50.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/stopped-upgrade-184733/id_rsa Username:docker}
	W0830 22:09:04.484761  991847 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0830 22:09:04.484845  991847 ssh_runner.go:195] Run: systemctl --version
	I0830 22:09:04.492468  991847 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:09:04.699715  991847 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:09:04.705596  991847 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:09:04.705687  991847 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:09:04.711552  991847 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0830 22:09:04.711578  991847 start.go:466] detecting cgroup driver to use...
	I0830 22:09:04.711643  991847 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:09:04.725560  991847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:09:04.737592  991847 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:09:04.737661  991847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:09:04.749038  991847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:09:04.759946  991847 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0830 22:09:04.770686  991847 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0830 22:09:04.770757  991847 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:09:04.887662  991847 docker.go:212] disabling docker service ...
	I0830 22:09:04.887767  991847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:09:04.902213  991847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:09:04.911195  991847 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:09:05.045451  991847 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:09:05.160978  991847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:09:05.171141  991847 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:09:05.184425  991847 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0830 22:09:05.184509  991847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:09:05.193630  991847 out.go:177] 
	W0830 22:09:05.195031  991847 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0830 22:09:05.195050  991847 out.go:239] * 
	* 
	W0830 22:09:05.196088  991847 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:09:05.197960  991847 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-184733 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (293.45s)
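Note on the failure above: the log shows the direct cause — on the v1.6.2-era guest (Buildroot 2019.02.7) the drop-in file /etc/crio/crio.conf.d/02-crio.conf does not exist, so the pause_image sed exits with status 1 and start aborts with RUNTIME_ENABLE. A minimal defensive sketch of the idea (hypothetical; not minikube's actual provisioning code, and the legacy single-file path /etc/crio/crio.conf is an assumption about the old ISO layout):

	# Hypothetical guard: prefer the drop-in config, fall back to the legacy single file.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # assumption: older guests ship only this file
	if [ -f "$CONF" ]; then
	  # Same substitution the log attempts, applied to whichever config is present.
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
	else
	  echo "no CRI-O config found; skipping pause_image update" >&2
	fi

This is only an illustration of why the sed fails on the upgraded VM; the successful TestPause run later in this report shows the same substitution working where the drop-in file exists.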

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (58.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-820510 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-820510 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.106116747s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-820510] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-820510 in cluster pause-820510
	* Updating the running kvm2 "pause-820510" VM ...
	* Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-820510" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:06:35.562341  990580 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:06:35.562481  990580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:06:35.562493  990580 out.go:309] Setting ErrFile to fd 2...
	I0830 22:06:35.562499  990580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:06:35.562853  990580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:06:35.563571  990580 out.go:303] Setting JSON to false
	I0830 22:06:35.564963  990580 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13743,"bootTime":1693419453,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:06:35.565048  990580 start.go:138] virtualization: kvm guest
	I0830 22:06:35.567842  990580 out.go:177] * [pause-820510] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:06:35.569546  990580 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:06:35.569610  990580 notify.go:220] Checking for updates...
	I0830 22:06:35.571126  990580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:06:35.573210  990580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:06:35.574705  990580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:06:35.576207  990580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:06:35.579750  990580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:06:35.581822  990580 config.go:182] Loaded profile config "pause-820510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:35.582465  990580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:06:35.582584  990580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:06:35.602027  990580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0830 22:06:35.602584  990580 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:06:35.603294  990580 main.go:141] libmachine: Using API Version  1
	I0830 22:06:35.603325  990580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:06:35.603801  990580 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:06:35.604000  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:35.604274  990580 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:06:35.604734  990580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:06:35.604775  990580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:06:35.623814  990580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40919
	I0830 22:06:35.624527  990580 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:06:35.625058  990580 main.go:141] libmachine: Using API Version  1
	I0830 22:06:35.625082  990580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:06:35.625496  990580 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:06:35.625649  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:35.668602  990580 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:06:35.670275  990580 start.go:298] selected driver: kvm2
	I0830 22:06:35.670297  990580 start.go:902] validating driver "kvm2" against &{Name:pause-820510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.1 ClusterName:pause-820510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-p
lugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:06:35.670495  990580 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:06:35.670932  990580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:06:35.671033  990580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:06:35.688304  990580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:06:35.689103  990580 cni.go:84] Creating CNI manager for ""
	I0830 22:06:35.689121  990580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:06:35.689134  990580 start_flags.go:319] config:
	{Name:pause-820510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-820510 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false
registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:06:35.689411  990580 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:06:35.691456  990580 out.go:177] * Starting control plane node pause-820510 in cluster pause-820510
	I0830 22:06:35.693061  990580 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:06:35.693122  990580 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 22:06:35.693145  990580 cache.go:57] Caching tarball of preloaded images
	I0830 22:06:35.693266  990580 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:06:35.693282  990580 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 22:06:35.693514  990580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/config.json ...
	I0830 22:06:35.693797  990580 start.go:365] acquiring machines lock for pause-820510: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:06:45.585054  990580 start.go:369] acquired machines lock for "pause-820510" in 9.891182s
	I0830 22:06:45.585111  990580 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:06:45.585119  990580 fix.go:54] fixHost starting: 
	I0830 22:06:45.585512  990580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:06:45.585566  990580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:06:45.602470  990580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0830 22:06:45.602909  990580 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:06:45.603471  990580 main.go:141] libmachine: Using API Version  1
	I0830 22:06:45.603500  990580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:06:45.603922  990580 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:06:45.604167  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:45.604337  990580 main.go:141] libmachine: (pause-820510) Calling .GetState
	I0830 22:06:45.606011  990580 fix.go:102] recreateIfNeeded on pause-820510: state=Running err=<nil>
	W0830 22:06:45.606028  990580 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:06:45.608283  990580 out.go:177] * Updating the running kvm2 "pause-820510" VM ...
	I0830 22:06:45.609755  990580 machine.go:88] provisioning docker machine ...
	I0830 22:06:45.609781  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:45.610019  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.610208  990580 buildroot.go:166] provisioning hostname "pause-820510"
	I0830 22:06:45.610247  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.610427  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.612864  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.613332  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.613366  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.613562  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:45.613747  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.613916  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.614067  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:45.614285  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:45.614720  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:45.614734  990580 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-820510 && echo "pause-820510" | sudo tee /etc/hostname
	I0830 22:06:45.761235  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-820510
	
	I0830 22:06:45.761263  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.764410  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.764838  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.764868  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.765095  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:45.765334  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.765531  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.765691  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:45.765905  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:45.766539  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:45.766571  990580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-820510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-820510/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-820510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:06:45.894801  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:06:45.894835  990580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:06:45.894861  990580 buildroot.go:174] setting up certificates
	I0830 22:06:45.894873  990580 provision.go:83] configureAuth start
	I0830 22:06:45.894923  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.895267  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:06:45.898467  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.898864  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.898894  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.899097  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.901866  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.902238  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.902269  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.902457  990580 provision.go:138] copyHostCerts
	I0830 22:06:45.902505  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:06:45.902522  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:06:45.902576  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:06:45.902678  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:06:45.902694  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:06:45.902715  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:06:45.902761  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:06:45.902768  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:06:45.902785  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:06:45.902823  990580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.pause-820510 san=[192.168.72.94 192.168.72.94 localhost 127.0.0.1 minikube pause-820510]
	I0830 22:06:46.040935  990580 provision.go:172] copyRemoteCerts
	I0830 22:06:46.041000  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:06:46.041026  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:46.044126  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.044484  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:46.044520  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.044742  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:46.044890  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.045076  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:46.045232  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:46.148676  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:06:46.174085  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0830 22:06:46.199141  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:06:46.225569  990580 provision.go:86] duration metric: configureAuth took 330.678788ms
	I0830 22:06:46.225597  990580 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:06:46.225851  990580 config.go:182] Loaded profile config "pause-820510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:46.225968  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:46.229315  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.229785  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:46.229821  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.229973  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:46.230151  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.230363  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.230655  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:46.230866  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:46.231518  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:46.231545  990580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:06:54.161743  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:06:54.161776  990580 machine.go:91] provisioned docker machine in 8.552000473s
	I0830 22:06:54.161790  990580 start.go:300] post-start starting for "pause-820510" (driver="kvm2")
	I0830 22:06:54.161806  990580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:06:54.161829  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.162145  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:06:54.162173  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.165200  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.165622  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.165653  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.165846  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:54.166034  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.166232  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:54.166375  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:54.759254  990580 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:06:54.766926  990580 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:06:54.766956  990580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:06:54.767095  990580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:06:54.767212  990580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:06:54.767327  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:06:54.788678  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:06:54.817090  990580 start.go:303] post-start completed in 655.283715ms
	I0830 22:06:54.817116  990580 fix.go:56] fixHost completed within 9.231998658s
	I0830 22:06:54.817139  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.820125  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.820521  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.820557  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.820836  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:54.821024  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.821190  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.821332  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:54.821500  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:54.822149  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:54.822169  990580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 22:06:54.992566  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433214.989306072
	
	I0830 22:06:54.992595  990580 fix.go:206] guest clock: 1693433214.989306072
	I0830 22:06:54.992605  990580 fix.go:219] Guest: 2023-08-30 22:06:54.989306072 +0000 UTC Remote: 2023-08-30 22:06:54.817120079 +0000 UTC m=+19.323029239 (delta=172.185993ms)
	I0830 22:06:54.992633  990580 fix.go:190] guest clock delta is within tolerance: 172.185993ms
	I0830 22:06:54.992639  990580 start.go:83] releasing machines lock for "pause-820510", held for 9.407551984s
	I0830 22:06:54.992686  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.992956  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:06:54.996069  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.996479  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.996510  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.996697  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997247  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997422  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997512  990580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:06:54.997562  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.997639  990580 ssh_runner.go:195] Run: cat /version.json
	I0830 22:06:54.997656  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:55.000331  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000570  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000731  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:55.000790  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000998  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:55.001213  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:55.001283  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:55.001301  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:55.001335  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.001453  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:55.001471  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:55.001594  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:55.001677  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:55.001718  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:55.163700  990580 ssh_runner.go:195] Run: systemctl --version
	I0830 22:06:55.177978  990580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:06:55.986097  990580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:06:56.015156  990580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:06:56.015282  990580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:06:56.109625  990580 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0830 22:06:56.109661  990580 start.go:466] detecting cgroup driver to use...
	I0830 22:06:56.109803  990580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:06:56.146125  990580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:06:56.170972  990580 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:06:56.171051  990580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:06:56.242531  990580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:06:56.275716  990580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:06:56.581802  990580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:06:56.815136  990580 docker.go:212] disabling docker service ...
	I0830 22:06:56.815243  990580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:06:56.836976  990580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:06:56.851767  990580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:06:57.098143  990580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:06:57.347551  990580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:06:57.371482  990580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:06:57.440051  990580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:06:57.440164  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.470626  990580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:06:57.470717  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.498416  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.527036  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.555430  990580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:06:57.583763  990580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:06:57.605280  990580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:06:57.638115  990580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:06:57.978882  990580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:07:07.412243  990580 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.433311903s)
	I0830 22:07:07.412282  990580 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:07:07.412346  990580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:07:07.419936  990580 start.go:534] Will wait 60s for crictl version
	I0830 22:07:07.420003  990580 ssh_runner.go:195] Run: which crictl
	I0830 22:07:07.425713  990580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:07:07.681641  990580 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:07:07.681755  990580 ssh_runner.go:195] Run: crio --version
	I0830 22:07:08.259976  990580 ssh_runner.go:195] Run: crio --version
	I0830 22:07:08.363331  990580 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:07:08.364997  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:07:08.368430  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:07:08.368857  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:07:08.368893  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:07:08.369113  990580 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0830 22:07:08.378998  990580 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:07:08.379077  990580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:07:08.444088  990580 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:07:08.444115  990580 crio.go:415] Images already preloaded, skipping extraction
	I0830 22:07:08.444179  990580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:07:08.497435  990580 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:07:08.497464  990580 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:07:08.497564  990580 ssh_runner.go:195] Run: crio config
	I0830 22:07:08.612207  990580 cni.go:84] Creating CNI manager for ""
	I0830 22:07:08.612239  990580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:07:08.612267  990580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:07:08.612295  990580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-820510 NodeName:pause-820510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:07:08.612513  990580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-820510"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:07:08.612616  990580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-820510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-820510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:07:08.612690  990580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:07:08.632244  990580 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:07:08.632339  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:07:08.652720  990580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0830 22:07:08.688698  990580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:07:08.726622  990580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
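A way to confirm the staged files landed where kubelet and kubeadm will look for them (a sketch using standard systemd and shell commands, not minikube's own verification path):
	sudo systemctl cat kubelet --no-pager            # kubelet.service plus the 10-kubeadm.conf drop-in with the ExecStart override shown above
	sudo ls -l /var/tmp/minikube/kubeadm.yaml.new    # the generated kubeadm config (2096 bytes in this run)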
	I0830 22:07:08.758923  990580 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0830 22:07:08.770902  990580 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510 for IP: 192.168.72.94
	I0830 22:07:08.770952  990580 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:07:08.771139  990580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:07:08.771204  990580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:07:08.771295  990580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/client.key
	I0830 22:07:08.771394  990580 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.key.90452619
	I0830 22:07:08.771460  990580 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.key
	I0830 22:07:08.771611  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:07:08.771647  990580 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:07:08.771662  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:07:08.771695  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:07:08.771730  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:07:08.771764  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:07:08.771837  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:07:08.772744  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:07:08.818726  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:07:08.862923  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:07:08.911406  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:07:08.999894  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:07:09.058431  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:07:09.119865  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:07:09.190391  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:07:09.241029  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:07:09.293842  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:07:09.338532  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:07:09.393923  990580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:07:09.420541  990580 ssh_runner.go:195] Run: openssl version
	I0830 22:07:09.432288  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:07:09.451569  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.461237  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.461322  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.472810  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:07:09.488771  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:07:09.507791  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.516037  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.516106  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.523501  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:07:09.545329  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:07:09.565996  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.575701  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.575768  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.588786  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
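The pattern repeated above for each CA bundle is OpenSSL's subject-hash symlink convention: compute the hash of the PEM, then point <hash>.0 in /etc/ssl/certs at it. Condensed into a sketch for the minikubeCA case, with paths taken from the log:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"              # what the "test -L ... || ln -fs ..." step above does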
	I0830 22:07:09.614023  990580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:07:09.624374  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:07:09.636450  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:07:09.649864  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:07:09.661854  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:07:09.672722  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:07:09.686022  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
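Each -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 h) from now; the exit status carries the answer, which is what lets the caller decide whether a certificate needs regenerating. Illustrative usage with one of the paths from the log:
	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "etcd server cert valid for at least another 24h"
	else
	    echo "etcd server cert expires within 24h (or is already expired)"
	fi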
	I0830 22:07:09.700215  990580 kubeadm.go:404] StartCluster: {Name:pause-820510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.1 ClusterName:pause-820510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:fa
lse pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:07:09.700375  990580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:07:09.700430  990580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:07:09.756078  990580 cri.go:89] found id: "1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3"
	I0830 22:07:09.756108  990580 cri.go:89] found id: "ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976"
	I0830 22:07:09.756115  990580 cri.go:89] found id: "aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	I0830 22:07:09.756121  990580 cri.go:89] found id: "b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495"
	I0830 22:07:09.756126  990580 cri.go:89] found id: "bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79"
	I0830 22:07:09.756133  990580 cri.go:89] found id: "9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639"
	I0830 22:07:09.756139  990580 cri.go:89] found id: ""
	I0830 22:07:09.756200  990580 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
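The captured log ends mid-step at "sudo runc list -f json", which dumps low-level runc container state as JSON; the same information is available as a table (a side note, not part of the captured output):
	sudo runc list            # columns: ID, PID, STATUS, BUNDLE, CREATED, OWNER
	sudo runc list -f json    # same data as a JSON array, one object per container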
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-820510 -n pause-820510
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-820510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-820510 logs -n 25: (1.295435833s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo docker                         | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo find                           | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo crio                           | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-051361                                     | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	| start   | -p cert-expiration-693390                            | cert-expiration-693390    | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:07 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-820510                                      | pause-820510              | jenkins | v1.31.2 | 30 Aug 23 22:06 UTC | 30 Aug 23 22:07 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-134135                          | force-systemd-env-134135  | jenkins | v1.31.2 | 30 Aug 23 22:06 UTC | 30 Aug 23 22:06 UTC |
	| start   | -p force-systemd-flag-882278                         | force-systemd-flag-882278 | jenkins | v1.31.2 | 30 Aug 23 22:06 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:06:44
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:06:44.280266  990773 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:06:44.280421  990773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:06:44.280431  990773 out.go:309] Setting ErrFile to fd 2...
	I0830 22:06:44.280439  990773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:06:44.280756  990773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:06:44.281463  990773 out.go:303] Setting JSON to false
	I0830 22:06:44.282779  990773 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13751,"bootTime":1693419453,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:06:44.282866  990773 start.go:138] virtualization: kvm guest
	I0830 22:06:44.286187  990773 out.go:177] * [force-systemd-flag-882278] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:06:44.288210  990773 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:06:44.289760  990773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:06:44.288305  990773 notify.go:220] Checking for updates...
	I0830 22:06:44.292277  990773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:06:44.293740  990773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:06:44.295073  990773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:06:44.296424  990773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:06:44.298290  990773 config.go:182] Loaded profile config "cert-expiration-693390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:44.298505  990773 config.go:182] Loaded profile config "pause-820510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:44.298590  990773 config.go:182] Loaded profile config "stopped-upgrade-184733": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0830 22:06:44.298728  990773 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:06:44.336203  990773 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 22:06:44.337754  990773 start.go:298] selected driver: kvm2
	I0830 22:06:44.337772  990773 start.go:902] validating driver "kvm2" against <nil>
	I0830 22:06:44.337786  990773 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:06:44.338633  990773 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:06:44.338708  990773 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:06:44.353848  990773 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:06:44.353888  990773 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 22:06:44.354081  990773 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0830 22:06:44.354110  990773 cni.go:84] Creating CNI manager for ""
	I0830 22:06:44.354119  990773 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:06:44.354127  990773 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0830 22:06:44.354133  990773 start_flags.go:319] config:
	{Name:force-systemd-flag-882278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-882278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:06:44.354267  990773 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:06:44.356107  990773 out.go:177] * Starting control plane node force-systemd-flag-882278 in cluster force-systemd-flag-882278
	I0830 22:06:43.308654  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:43.309028  990141 main.go:141] libmachine: (cert-expiration-693390) Found IP for machine: 192.168.61.85
	I0830 22:06:43.309042  990141 main.go:141] libmachine: (cert-expiration-693390) Reserving static IP address...
	I0830 22:06:43.309057  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has current primary IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:43.309345  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | unable to find host DHCP lease matching {name: "cert-expiration-693390", mac: "52:54:00:f5:14:e4", ip: "192.168.61.85"} in network mk-cert-expiration-693390
	I0830 22:06:44.034057  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Getting to WaitForSSH function...
	I0830 22:06:44.034083  990141 main.go:141] libmachine: (cert-expiration-693390) Reserved static IP address: 192.168.61.85
	I0830 22:06:44.034098  990141 main.go:141] libmachine: (cert-expiration-693390) Waiting for SSH to be available...
	I0830 22:06:44.036624  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.036998  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.037021  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.037182  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Using SSH client type: external
	I0830 22:06:44.037203  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa (-rw-------)
	I0830 22:06:44.037246  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:06:44.037266  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | About to run SSH command:
	I0830 22:06:44.037277  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | exit 0
	I0830 22:06:44.131904  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | SSH cmd err, output: <nil>: 
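Assembled from the DBG options above, the probe libmachine runs to decide that SSH is available is essentially the following (an illustrative reconstruction, not a literal excerpt from the run):
	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none \
	    -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no \
	    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa \
	    -p 22 docker@192.168.61.85 "exit 0"    # a clean exit is what the "SSH cmd err, output: <nil>" line above reports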
	I0830 22:06:44.132164  990141 main.go:141] libmachine: (cert-expiration-693390) KVM machine creation complete!
	I0830 22:06:44.132483  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetConfigRaw
	I0830 22:06:44.133066  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:44.133276  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:44.133420  990141 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 22:06:44.133433  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetState
	I0830 22:06:44.135015  990141 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 22:06:44.135026  990141 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 22:06:44.135034  990141 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 22:06:44.135043  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.137601  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.137933  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.137962  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.138121  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.138325  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.138487  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.138613  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.138796  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.139219  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.139225  990141 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 22:06:44.263077  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:06:44.263102  990141 main.go:141] libmachine: Detecting the provisioner...
	I0830 22:06:44.263112  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.266077  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.266386  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.266411  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.266564  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.266729  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.266878  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.266986  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.267177  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.267553  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.267563  990141 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 22:06:44.388718  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 22:06:44.388843  990141 main.go:141] libmachine: found compatible host: buildroot
	I0830 22:06:44.388852  990141 main.go:141] libmachine: Provisioning with buildroot...
	I0830 22:06:44.388864  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetMachineName
	I0830 22:06:44.389142  990141 buildroot.go:166] provisioning hostname "cert-expiration-693390"
	I0830 22:06:44.389161  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetMachineName
	I0830 22:06:44.389357  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.392315  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.392712  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.392734  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.392870  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.393083  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.393325  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.393492  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.393610  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.393997  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.394005  990141 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-693390 && echo "cert-expiration-693390" | sudo tee /etc/hostname
	I0830 22:06:44.520240  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-693390
	
	I0830 22:06:44.520271  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.523000  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.523379  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.523405  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.523571  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.523816  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.524033  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.524175  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.524311  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.524920  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.524941  990141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-693390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-693390/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-693390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:06:44.649707  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:06:44.649729  990141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:06:44.649756  990141 buildroot.go:174] setting up certificates
	I0830 22:06:44.649766  990141 provision.go:83] configureAuth start
	I0830 22:06:44.649775  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetMachineName
	I0830 22:06:44.650151  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetIP
	I0830 22:06:44.653121  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.653526  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.653562  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.653662  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.656009  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.656336  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.656354  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.656479  990141 provision.go:138] copyHostCerts
	I0830 22:06:44.656551  990141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:06:44.656567  990141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:06:44.656628  990141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:06:44.656733  990141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:06:44.656742  990141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:06:44.656764  990141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:06:44.656804  990141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:06:44.656806  990141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:06:44.656822  990141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:06:44.656856  990141 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-693390 san=[192.168.61.85 192.168.61.85 localhost 127.0.0.1 minikube cert-expiration-693390]
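The server certificate generated here carries the SAN list shown (the VM IP twice, localhost, 127.0.0.1, minikube, and the machine name). One way to inspect it afterwards, with the path taken from the auth options above (a sketch):
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'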
	I0830 22:06:44.822525  990141 provision.go:172] copyRemoteCerts
	I0830 22:06:44.822574  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:06:44.822599  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.825495  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.825790  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.825813  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.825989  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.826213  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.826393  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.826515  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:06:45.585054  990580 start.go:369] acquired machines lock for "pause-820510" in 9.891182s
	I0830 22:06:45.585111  990580 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:06:45.585119  990580 fix.go:54] fixHost starting: 
	I0830 22:06:45.585512  990580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:06:45.585566  990580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:06:45.602470  990580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0830 22:06:45.602909  990580 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:06:45.603471  990580 main.go:141] libmachine: Using API Version  1
	I0830 22:06:45.603500  990580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:06:45.603922  990580 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:06:45.604167  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:45.604337  990580 main.go:141] libmachine: (pause-820510) Calling .GetState
	I0830 22:06:45.606011  990580 fix.go:102] recreateIfNeeded on pause-820510: state=Running err=<nil>
	W0830 22:06:45.606028  990580 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:06:45.608283  990580 out.go:177] * Updating the running kvm2 "pause-820510" VM ...
	I0830 22:06:44.913917  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:06:44.938697  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:06:44.961766  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:06:44.983227  990141 provision.go:86] duration metric: configureAuth took 333.446849ms
	I0830 22:06:44.983248  990141 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:06:44.983444  990141 config.go:182] Loaded profile config "cert-expiration-693390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:44.983520  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.986306  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.986622  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.986659  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.986843  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.987009  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.987177  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.987330  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.987476  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.987883  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.987893  990141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:06:45.330110  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:06:45.330130  990141 main.go:141] libmachine: Checking connection to Docker...
	I0830 22:06:45.330141  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetURL
	I0830 22:06:45.331518  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Using libvirt version 6000000
	I0830 22:06:45.333915  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.334291  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.334323  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.334485  990141 main.go:141] libmachine: Docker is up and running!
	I0830 22:06:45.334492  990141 main.go:141] libmachine: Reticulating splines...
	I0830 22:06:45.334497  990141 client.go:171] LocalClient.Create took 25.141183597s
	I0830 22:06:45.334521  990141 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-693390" took 25.141238644s
	I0830 22:06:45.334530  990141 start.go:300] post-start starting for "cert-expiration-693390" (driver="kvm2")
	I0830 22:06:45.334541  990141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:06:45.334560  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.334842  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:06:45.334862  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:45.336922  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.337260  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.337277  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.337430  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:45.337609  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.337782  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:45.337929  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:06:45.424687  990141 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:06:45.429031  990141 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:06:45.429049  990141 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:06:45.429105  990141 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:06:45.429191  990141 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:06:45.429298  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:06:45.437089  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:06:45.461099  990141 start.go:303] post-start completed in 126.555206ms
	I0830 22:06:45.461141  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetConfigRaw
	I0830 22:06:45.461804  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetIP
	I0830 22:06:45.464488  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.464872  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.464899  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.465110  990141 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/config.json ...
	I0830 22:06:45.465284  990141 start.go:128] duration metric: createHost completed in 25.29517827s
	I0830 22:06:45.465300  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:45.467514  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.467904  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.467920  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.468090  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:45.468286  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.468465  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.468590  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:45.468778  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:45.469159  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:45.469164  990141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 22:06:45.584905  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433205.559243147
	
	I0830 22:06:45.584917  990141 fix.go:206] guest clock: 1693433205.559243147
	I0830 22:06:45.584923  990141 fix.go:219] Guest: 2023-08-30 22:06:45.559243147 +0000 UTC Remote: 2023-08-30 22:06:45.46528951 +0000 UTC m=+50.602366631 (delta=93.953637ms)
	I0830 22:06:45.584941  990141 fix.go:190] guest clock delta is within tolerance: 93.953637ms
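The fix.go lines above read the guest's wall clock over SSH (the `date +%s.%N` command just before) and compare it with the host's, proceeding only when the skew is inside a tolerance. A minimal Go sketch of that idea follows; the one-second tolerance and the helper name are illustrative assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the difference between the guest
// and host clocks is small enough to skip resynchronising the guest.
// The tolerance passed in main is an assumed example value.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(94 * time.Millisecond) // roughly the delta reported in the log above
	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}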
	I0830 22:06:45.584945  990141 start.go:83] releasing machines lock for "cert-expiration-693390", held for 25.415017017s
	I0830 22:06:45.584967  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.585267  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetIP
	I0830 22:06:45.589991  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.590415  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.590460  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.590568  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.591110  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.591305  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.591405  990141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:06:45.591448  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:45.591556  990141 ssh_runner.go:195] Run: cat /version.json
	I0830 22:06:45.591578  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:45.594022  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.594343  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.594360  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.594466  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:45.594501  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.594630  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.594800  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:45.594898  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.594927  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.594929  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:06:45.595082  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:45.595222  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.595381  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:45.595502  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:06:45.705257  990141 ssh_runner.go:195] Run: systemctl --version
	I0830 22:06:45.710749  990141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:06:45.867834  990141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:06:45.874281  990141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:06:45.874358  990141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:06:45.892492  990141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:06:45.892509  990141 start.go:466] detecting cgroup driver to use...
	I0830 22:06:45.892580  990141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:06:45.910503  990141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:06:45.925507  990141 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:06:45.925561  990141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:06:45.940265  990141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:06:45.955901  990141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:06:46.066390  990141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:06:46.195421  990141 docker.go:212] disabling docker service ...
	I0830 22:06:46.195504  990141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:06:46.209707  990141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:06:46.221607  990141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:06:46.339598  990141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:06:46.456877  990141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:06:46.471592  990141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:06:46.492344  990141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:06:46.492397  990141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:46.502538  990141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:06:46.502586  990141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:46.511626  990141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:46.520818  990141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:46.530070  990141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:06:46.539656  990141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:06:46.547696  990141 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:06:46.547755  990141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:06:46.560736  990141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:06:46.569336  990141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:06:46.669331  990141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:06:46.826132  990141 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:06:46.826232  990141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:06:46.831011  990141 start.go:534] Will wait 60s for crictl version
	I0830 22:06:46.831060  990141 ssh_runner.go:195] Run: which crictl
	I0830 22:06:46.835061  990141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:06:46.867704  990141 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:06:46.867778  990141 ssh_runner.go:195] Run: crio --version
	I0830 22:06:46.912897  990141 ssh_runner.go:195] Run: crio --version
	I0830 22:06:46.966035  990141 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
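The "Will wait 60s for socket path /var/run/crio/crio.sock" step above boils down to polling the path until it appears or a deadline passes, after which crictl is probed for the runtime version. A rough Go sketch of that wait, assuming a simple stat-poll loop (the 500ms interval is an assumption, not minikube's exact retry helper):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path until it exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}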
	I0830 22:06:44.357374  990773 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:06:44.357412  990773 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 22:06:44.357422  990773 cache.go:57] Caching tarball of preloaded images
	I0830 22:06:44.357497  990773 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:06:44.357507  990773 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 22:06:44.357597  990773 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/force-systemd-flag-882278/config.json ...
	I0830 22:06:44.357613  990773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/force-systemd-flag-882278/config.json: {Name:mk936d9606351e54c6245936e50fb75dfebaa0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:44.357736  990773 start.go:365] acquiring machines lock for force-systemd-flag-882278: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:06:46.967623  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetIP
	I0830 22:06:46.970474  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:46.970782  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:46.970805  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:46.971009  990141 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0830 22:06:46.975143  990141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
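The one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends a fresh "ip<TAB>name" entry. A small Go sketch of the same idea; writing straight back to the file (instead of staging in /tmp and copying with sudo, as the shell command does) is a simplification for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "<TAB>name" and
// appends "ip<TAB>name", mirroring the shell pipeline in the log above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}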
	I0830 22:06:46.986860  990141 localpath.go:92] copying /home/jenkins/minikube-integration/17114-955377/.minikube/client.crt -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/client.crt
	I0830 22:06:46.987013  990141 localpath.go:117] copying /home/jenkins/minikube-integration/17114-955377/.minikube/client.key -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/client.key
	I0830 22:06:46.987191  990141 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:06:46.987242  990141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:06:47.015341  990141 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:06:47.015404  990141 ssh_runner.go:195] Run: which lz4
	I0830 22:06:47.019318  990141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 22:06:47.023264  990141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:06:47.023294  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:06:48.849466  990141 crio.go:444] Took 1.830176 seconds to copy over tarball
	I0830 22:06:48.849524  990141 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:06:45.609755  990580 machine.go:88] provisioning docker machine ...
	I0830 22:06:45.609781  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:45.610019  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.610208  990580 buildroot.go:166] provisioning hostname "pause-820510"
	I0830 22:06:45.610247  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.610427  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.612864  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.613332  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.613366  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.613562  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:45.613747  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.613916  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.614067  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:45.614285  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:45.614720  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:45.614734  990580 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-820510 && echo "pause-820510" | sudo tee /etc/hostname
	I0830 22:06:45.761235  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-820510
	
	I0830 22:06:45.761263  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.764410  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.764838  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.764868  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.765095  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:45.765334  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.765531  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.765691  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:45.765905  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:45.766539  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:45.766571  990580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-820510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-820510/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-820510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:06:45.894801  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:06:45.894835  990580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:06:45.894861  990580 buildroot.go:174] setting up certificates
	I0830 22:06:45.894873  990580 provision.go:83] configureAuth start
	I0830 22:06:45.894923  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.895267  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:06:45.898467  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.898864  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.898894  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.899097  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.901866  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.902238  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.902269  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.902457  990580 provision.go:138] copyHostCerts
	I0830 22:06:45.902505  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:06:45.902522  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:06:45.902576  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:06:45.902678  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:06:45.902694  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:06:45.902715  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:06:45.902761  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:06:45.902768  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:06:45.902785  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:06:45.902823  990580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.pause-820510 san=[192.168.72.94 192.168.72.94 localhost 127.0.0.1 minikube pause-820510]
	I0830 22:06:46.040935  990580 provision.go:172] copyRemoteCerts
	I0830 22:06:46.041000  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:06:46.041026  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:46.044126  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.044484  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:46.044520  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.044742  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:46.044890  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.045076  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:46.045232  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:46.148676  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:06:46.174085  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0830 22:06:46.199141  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:06:46.225569  990580 provision.go:86] duration metric: configureAuth took 330.678788ms
	I0830 22:06:46.225597  990580 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:06:46.225851  990580 config.go:182] Loaded profile config "pause-820510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:46.225968  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:46.229315  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.229785  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:46.229821  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.229973  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:46.230151  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.230363  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.230655  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:46.230866  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:46.231518  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:46.231545  990580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:06:51.716360  990141 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.866812305s)
	I0830 22:06:51.716378  990141 crio.go:451] Took 2.866893 seconds to extract the tarball
	I0830 22:06:51.716389  990141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:06:51.758108  990141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:06:51.880454  990141 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:06:51.880465  990141 cache_images.go:84] Images are preloaded, skipping loading
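The two `sudo crictl images --output json` runs above are how the preload decision is made: the first finds no kube-apiserver:v1.28.1 image, so the tarball is copied and extracted; the second confirms everything is present and image loading is skipped. A sketch of that check, assuming the usual crictl JSON shape with an `images` array whose entries carry `repoTags` (treat the schema as an assumption, not a guaranteed contract):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages matches the relevant part of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag,
// e.g. "registry.k8s.io/kube-apiserver:v1.28.1".
func hasImage(raw []byte, wanted string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, wanted) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl not available:", err)
		return
	}
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.28.1")
	fmt.Println("preloaded apiserver image present:", ok, err)
}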
	I0830 22:06:51.880523  990141 ssh_runner.go:195] Run: crio config
	I0830 22:06:51.941934  990141 cni.go:84] Creating CNI manager for ""
	I0830 22:06:51.941947  990141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:06:51.941969  990141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:06:51.942003  990141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.85 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-693390 NodeName:cert-expiration-693390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:06:51.942173  990141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-693390"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:06:51.942234  990141 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=cert-expiration-693390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:cert-expiration-693390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:06:51.942284  990141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:06:51.952345  990141 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:06:51.952421  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:06:51.961700  990141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0830 22:06:51.977545  990141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:06:51.994404  990141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0830 22:06:52.010899  990141 ssh_runner.go:195] Run: grep 192.168.61.85	control-plane.minikube.internal$ /etc/hosts
	I0830 22:06:52.014738  990141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:06:52.026182  990141 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390 for IP: 192.168.61.85
	I0830 22:06:52.026207  990141 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.026426  990141 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:06:52.026474  990141 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:06:52.026582  990141 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/client.key
	I0830 22:06:52.026604  990141 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key.230625e1
	I0830 22:06:52.026624  990141 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt.230625e1 with IP's: [192.168.61.85 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 22:06:52.170288  990141 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt.230625e1 ...
	I0830 22:06:52.170308  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt.230625e1: {Name:mka0c3818d2ac1dfff963b14a0e3d08ae46e9b22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.170503  990141 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key.230625e1 ...
	I0830 22:06:52.170514  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key.230625e1: {Name:mkd09917b21ea61e8da5a121404b3d8f775e9118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.170579  990141 certs.go:337] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt.230625e1 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt
	I0830 22:06:52.170630  990141 certs.go:341] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key.230625e1 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key
	I0830 22:06:52.170674  990141 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.key
	I0830 22:06:52.170683  990141 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.crt with IP's: []
	I0830 22:06:52.407395  990141 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.crt ...
	I0830 22:06:52.407413  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.crt: {Name:mk479d34b53aafd5d58997625c425792b53320da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.416804  990141 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.key ...
	I0830 22:06:52.416826  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.key: {Name:mk16c661e834a055c2ec5a63de9ff8e87ed06581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
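The crypto.go steps above generate the apiserver key pair and sign it with the cached minikubeCA, with the node IP and service IPs as SANs. The same flow can be reproduced with the standard crypto/x509 package; the compressed, self-contained sketch below signs a server certificate with the IP SANs seen in the log against a throwaway CA (names, serials and the 24h validity are illustrative, not minikube's values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; minikube reuses the cached minikubeCA instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.61.85"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("signed apiserver cert: %d bytes of DER\n", len(srvDER))
}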
	I0830 22:06:52.417054  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:06:52.417102  990141 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:06:52.417113  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:06:52.417140  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:06:52.417170  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:06:52.417194  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:06:52.417248  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:06:52.418043  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:06:52.442772  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:06:52.465083  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:06:52.486921  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:06:52.508040  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:06:52.529546  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:06:52.550843  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:06:52.572823  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:06:52.596139  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:06:52.617611  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:06:52.639351  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:06:52.660678  990141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:06:52.675868  990141 ssh_runner.go:195] Run: openssl version
	I0830 22:06:52.681359  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:06:52.692582  990141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:06:52.697345  990141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:06:52.697393  990141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:06:52.703179  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:06:52.714922  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:06:52.726606  990141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:06:52.731441  990141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:06:52.731492  990141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:06:52.737190  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:06:52.747567  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:06:52.758269  990141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:06:52.762786  990141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:06:52.762846  990141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:06:52.768268  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
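The `openssl x509 -hash -noout -in <cert>` runs followed by the `ln -fs ... /etc/ssl/certs/<hash>.0` commands above install each CA into the guest's trust store under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem). A small sketch that reproduces the same two steps by shelling out to openssl; the paths are taken from the log, error handling is minimal, and minikube itself runs the equivalent commands over SSH inside the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// (or refreshes) the <certsDir>/<hash>.0 symlink pointing at it.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}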
	I0830 22:06:52.779185  990141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:06:52.783411  990141 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 22:06:52.783458  990141 kubeadm.go:404] StartCluster: {Name:cert-expiration-693390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.1 ClusterName:cert-expiration-693390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.85 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:06:52.783549  990141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:06:52.783596  990141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:06:52.822575  990141 cri.go:89] found id: ""
	I0830 22:06:52.822635  990141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:06:52.835565  990141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:06:52.848333  990141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:06:52.860305  990141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:06:52.860345  990141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 22:06:52.974126  990141 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:06:52.974250  990141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:06:53.249122  990141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:06:53.249239  990141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:06:53.249335  990141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:06:53.442120  990141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:06:53.622059  990141 out.go:204]   - Generating certificates and keys ...
	I0830 22:06:53.622242  990141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:06:53.622351  990141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:06:53.673430  990141 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 22:06:53.760222  990141 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 22:06:53.995408  990141 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 22:06:54.079659  990141 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 22:06:54.411095  990141 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 22:06:54.411612  990141 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-693390 localhost] and IPs [192.168.61.85 127.0.0.1 ::1]
	I0830 22:06:54.667920  990141 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 22:06:54.668467  990141 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-693390 localhost] and IPs [192.168.61.85 127.0.0.1 ::1]
	I0830 22:06:54.854520  990141 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 22:06:54.992756  990773 start.go:369] acquired machines lock for "force-systemd-flag-882278" in 10.634961869s
	I0830 22:06:54.992833  990773 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-882278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-882278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:06:54.992975  990773 start.go:125] createHost starting for "" (driver="kvm2")
	I0830 22:06:54.161743  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:06:54.161776  990580 machine.go:91] provisioned docker machine in 8.552000473s
	I0830 22:06:54.161790  990580 start.go:300] post-start starting for "pause-820510" (driver="kvm2")
	I0830 22:06:54.161806  990580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:06:54.161829  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.162145  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:06:54.162173  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.165200  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.165622  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.165653  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.165846  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:54.166034  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.166232  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:54.166375  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:54.759254  990580 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:06:54.766926  990580 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:06:54.766956  990580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:06:54.767095  990580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:06:54.767212  990580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:06:54.767327  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:06:54.788678  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:06:54.817090  990580 start.go:303] post-start completed in 655.283715ms
	I0830 22:06:54.817116  990580 fix.go:56] fixHost completed within 9.231998658s
	I0830 22:06:54.817139  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.820125  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.820521  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.820557  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.820836  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:54.821024  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.821190  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.821332  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:54.821500  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:54.822149  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:54.822169  990580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:06:54.992566  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433214.989306072
	
	I0830 22:06:54.992595  990580 fix.go:206] guest clock: 1693433214.989306072
	I0830 22:06:54.992605  990580 fix.go:219] Guest: 2023-08-30 22:06:54.989306072 +0000 UTC Remote: 2023-08-30 22:06:54.817120079 +0000 UTC m=+19.323029239 (delta=172.185993ms)
	I0830 22:06:54.992633  990580 fix.go:190] guest clock delta is within tolerance: 172.185993ms
	I0830 22:06:54.992639  990580 start.go:83] releasing machines lock for "pause-820510", held for 9.407551984s
	I0830 22:06:54.992686  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.992956  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:06:54.996069  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.996479  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.996510  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.996697  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997247  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997422  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997512  990580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:06:54.997562  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.997639  990580 ssh_runner.go:195] Run: cat /version.json
	I0830 22:06:54.997656  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:55.000331  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000570  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000731  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:55.000790  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000998  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:55.001213  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:55.001283  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:55.001301  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:55.001335  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.001453  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:55.001471  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:55.001594  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:55.001677  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:55.001718  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:55.163700  990580 ssh_runner.go:195] Run: systemctl --version
	I0830 22:06:55.177978  990580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:06:54.960236  990141 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 22:06:55.021758  990141 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 22:06:55.021892  990141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:06:55.246378  990141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:06:55.604213  990141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:06:55.733311  990141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:06:56.065326  990141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:06:56.066225  990141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:06:56.069290  990141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:06:54.995177  990773 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0830 22:06:54.995415  990773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:06:54.995483  990773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:06:55.014418  990773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0830 22:06:55.014940  990773 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:06:55.015562  990773 main.go:141] libmachine: Using API Version  1
	I0830 22:06:55.015586  990773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:06:55.015966  990773 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:06:55.016143  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .GetMachineName
	I0830 22:06:55.016290  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .DriverName
	I0830 22:06:55.016498  990773 start.go:159] libmachine.API.Create for "force-systemd-flag-882278" (driver="kvm2")
	I0830 22:06:55.016534  990773 client.go:168] LocalClient.Create starting
	I0830 22:06:55.016570  990773 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem
	I0830 22:06:55.016615  990773 main.go:141] libmachine: Decoding PEM data...
	I0830 22:06:55.016637  990773 main.go:141] libmachine: Parsing certificate...
	I0830 22:06:55.016711  990773 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem
	I0830 22:06:55.016739  990773 main.go:141] libmachine: Decoding PEM data...
	I0830 22:06:55.016762  990773 main.go:141] libmachine: Parsing certificate...
	I0830 22:06:55.016791  990773 main.go:141] libmachine: Running pre-create checks...
	I0830 22:06:55.016805  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .PreCreateCheck
	I0830 22:06:55.017214  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .GetConfigRaw
	I0830 22:06:55.017712  990773 main.go:141] libmachine: Creating machine...
	I0830 22:06:55.017732  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .Create
	I0830 22:06:55.017859  990773 main.go:141] libmachine: (force-systemd-flag-882278) Creating KVM machine...
	I0830 22:06:55.019154  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | found existing default KVM network
	I0830 22:06:55.022308  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.021138  990833 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f7d0}
	I0830 22:06:55.027330  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | trying to create private KVM network mk-force-systemd-flag-882278 192.168.39.0/24...
	I0830 22:06:55.113030  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | private KVM network mk-force-systemd-flag-882278 192.168.39.0/24 created
	I0830 22:06:55.113202  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting up store path in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278 ...
	I0830 22:06:55.113235  990773 main.go:141] libmachine: (force-systemd-flag-882278) Building disk image from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 22:06:55.113250  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.113165  990833 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:06:55.113289  990773 main.go:141] libmachine: (force-systemd-flag-882278) Downloading /home/jenkins/minikube-integration/17114-955377/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 22:06:55.396018  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.395837  990833 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278/id_rsa...
	I0830 22:06:55.539041  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.538895  990833 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278/force-systemd-flag-882278.rawdisk...
	I0830 22:06:55.539075  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Writing magic tar header
	I0830 22:06:55.539112  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Writing SSH key tar header
	I0830 22:06:55.539133  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.539090  990833 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278 ...
	I0830 22:06:55.539311  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278 (perms=drwx------)
	I0830 22:06:55.539334  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines (perms=drwxr-xr-x)
	I0830 22:06:55.539348  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube (perms=drwxr-xr-x)
	I0830 22:06:55.539358  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377 (perms=drwxrwxr-x)
	I0830 22:06:55.539370  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 22:06:55.539380  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 22:06:55.539391  990773 main.go:141] libmachine: (force-systemd-flag-882278) Creating domain...
	I0830 22:06:55.539415  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278
	I0830 22:06:55.539425  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines
	I0830 22:06:55.539437  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:06:55.539450  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377
	I0830 22:06:55.539461  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 22:06:55.539471  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins
	I0830 22:06:55.539481  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home
	I0830 22:06:55.539491  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Skipping /home - not owner
	I0830 22:06:55.541549  990773 main.go:141] libmachine: (force-systemd-flag-882278) define libvirt domain using xml: 
	I0830 22:06:55.541575  990773 main.go:141] libmachine: (force-systemd-flag-882278) <domain type='kvm'>
	I0830 22:06:55.541594  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <name>force-systemd-flag-882278</name>
	I0830 22:06:55.541612  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <memory unit='MiB'>2048</memory>
	I0830 22:06:55.541646  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <vcpu>2</vcpu>
	I0830 22:06:55.541664  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <features>
	I0830 22:06:55.541694  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <acpi/>
	I0830 22:06:55.541707  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <apic/>
	I0830 22:06:55.541718  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <pae/>
	I0830 22:06:55.541730  990773 main.go:141] libmachine: (force-systemd-flag-882278)     
	I0830 22:06:55.541743  990773 main.go:141] libmachine: (force-systemd-flag-882278)   </features>
	I0830 22:06:55.541756  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <cpu mode='host-passthrough'>
	I0830 22:06:55.541769  990773 main.go:141] libmachine: (force-systemd-flag-882278)   
	I0830 22:06:55.541779  990773 main.go:141] libmachine: (force-systemd-flag-882278)   </cpu>
	I0830 22:06:55.541790  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <os>
	I0830 22:06:55.541809  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <type>hvm</type>
	I0830 22:06:55.541823  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <boot dev='cdrom'/>
	I0830 22:06:55.541837  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <boot dev='hd'/>
	I0830 22:06:55.541852  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <bootmenu enable='no'/>
	I0830 22:06:55.541863  990773 main.go:141] libmachine: (force-systemd-flag-882278)   </os>
	I0830 22:06:55.541876  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <devices>
	I0830 22:06:55.541888  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <disk type='file' device='cdrom'>
	I0830 22:06:55.541904  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278/boot2docker.iso'/>
	I0830 22:06:55.541918  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <target dev='hdc' bus='scsi'/>
	I0830 22:06:55.541932  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <readonly/>
	I0830 22:06:55.541945  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </disk>
	I0830 22:06:55.541960  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <disk type='file' device='disk'>
	I0830 22:06:55.541974  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 22:06:55.541989  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278/force-systemd-flag-882278.rawdisk'/>
	I0830 22:06:55.542001  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <target dev='hda' bus='virtio'/>
	I0830 22:06:55.542018  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </disk>
	I0830 22:06:55.542031  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <interface type='network'>
	I0830 22:06:55.542045  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <source network='mk-force-systemd-flag-882278'/>
	I0830 22:06:55.542058  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <model type='virtio'/>
	I0830 22:06:55.542071  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </interface>
	I0830 22:06:55.542080  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <interface type='network'>
	I0830 22:06:55.542090  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <source network='default'/>
	I0830 22:06:55.542103  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <model type='virtio'/>
	I0830 22:06:55.542114  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </interface>
	I0830 22:06:55.542126  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <serial type='pty'>
	I0830 22:06:55.542140  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <target port='0'/>
	I0830 22:06:55.542156  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </serial>
	I0830 22:06:55.542172  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <console type='pty'>
	I0830 22:06:55.542182  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <target type='serial' port='0'/>
	I0830 22:06:55.542202  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </console>
	I0830 22:06:55.542214  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <rng model='virtio'>
	I0830 22:06:55.542229  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <backend model='random'>/dev/random</backend>
	I0830 22:06:55.542241  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </rng>
	I0830 22:06:55.542254  990773 main.go:141] libmachine: (force-systemd-flag-882278)     
	I0830 22:06:55.542262  990773 main.go:141] libmachine: (force-systemd-flag-882278)     
	I0830 22:06:55.542276  990773 main.go:141] libmachine: (force-systemd-flag-882278)   </devices>
	I0830 22:06:55.542288  990773 main.go:141] libmachine: (force-systemd-flag-882278) </domain>
	I0830 22:06:55.542306  990773 main.go:141] libmachine: (force-systemd-flag-882278) 
	I0830 22:06:55.629482  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:de:8a:14 in network default
	I0830 22:06:55.630182  990773 main.go:141] libmachine: (force-systemd-flag-882278) Ensuring networks are active...
	I0830 22:06:55.630212  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:55.631044  990773 main.go:141] libmachine: (force-systemd-flag-882278) Ensuring network default is active
	I0830 22:06:55.631363  990773 main.go:141] libmachine: (force-systemd-flag-882278) Ensuring network mk-force-systemd-flag-882278 is active
	I0830 22:06:55.631990  990773 main.go:141] libmachine: (force-systemd-flag-882278) Getting domain xml...
	I0830 22:06:55.632807  990773 main.go:141] libmachine: (force-systemd-flag-882278) Creating domain...
	I0830 22:06:57.026310  990773 main.go:141] libmachine: (force-systemd-flag-882278) Waiting to get IP...
	I0830 22:06:57.027245  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:57.027741  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:57.027813  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:57.027734  990833 retry.go:31] will retry after 215.225269ms: waiting for machine to come up
	I0830 22:06:57.244152  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:57.244724  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:57.244753  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:57.244690  990833 retry.go:31] will retry after 387.579873ms: waiting for machine to come up
	I0830 22:06:57.634209  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:57.634776  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:57.634804  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:57.634725  990833 retry.go:31] will retry after 346.434842ms: waiting for machine to come up
	I0830 22:06:57.983503  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:57.984087  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:57.984125  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:57.984030  990833 retry.go:31] will retry after 569.109205ms: waiting for machine to come up
	I0830 22:06:58.554714  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:58.555236  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:58.555262  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:58.555176  990833 retry.go:31] will retry after 631.47767ms: waiting for machine to come up
	I0830 22:06:59.188133  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:59.188603  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:59.188638  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:59.188536  990833 retry.go:31] will retry after 618.085766ms: waiting for machine to come up
	I0830 22:06:56.070848  990141 out.go:204]   - Booting up control plane ...
	I0830 22:06:56.071018  990141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:06:56.071141  990141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:06:56.071551  990141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:06:56.095950  990141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:06:56.098662  990141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:06:56.098831  990141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:06:56.241394  990141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:06:55.986097  990580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:06:56.015156  990580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:06:56.015282  990580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:06:56.109625  990580 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0830 22:06:56.109661  990580 start.go:466] detecting cgroup driver to use...
	I0830 22:06:56.109803  990580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:06:56.146125  990580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:06:56.170972  990580 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:06:56.171051  990580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:06:56.242531  990580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:06:56.275716  990580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:06:56.581802  990580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:06:56.815136  990580 docker.go:212] disabling docker service ...
	I0830 22:06:56.815243  990580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:06:56.836976  990580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:06:56.851767  990580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:06:57.098143  990580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:06:57.347551  990580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:06:57.371482  990580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:06:57.440051  990580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:06:57.440164  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.470626  990580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:06:57.470717  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.498416  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.527036  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.555430  990580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:06:57.583763  990580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:06:57.605280  990580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:06:57.638115  990580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:06:57.978882  990580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:06:59.808094  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:59.808512  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:59.808545  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:59.808448  990833 retry.go:31] will retry after 720.710014ms: waiting for machine to come up
	I0830 22:07:00.530748  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:00.531257  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:00.531288  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:00.531205  990833 retry.go:31] will retry after 1.482403978s: waiting for machine to come up
	I0830 22:07:02.015218  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:02.015737  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:02.015781  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:02.015672  990833 retry.go:31] will retry after 1.803287858s: waiting for machine to come up
	I0830 22:07:03.820912  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:03.821341  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:03.821371  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:03.821283  990833 retry.go:31] will retry after 1.673310877s: waiting for machine to come up
	I0830 22:07:04.243898  990141 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005121 seconds
	I0830 22:07:04.244079  990141 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:07:04.262984  990141 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:07:04.796216  990141 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:07:04.796479  990141 kubeadm.go:322] [mark-control-plane] Marking the node cert-expiration-693390 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:07:05.314517  990141 kubeadm.go:322] [bootstrap-token] Using token: 3pliu7.2ck2z7k2h029781o
	I0830 22:07:05.316183  990141 out.go:204]   - Configuring RBAC rules ...
	I0830 22:07:05.316336  990141 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:07:05.328304  990141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:07:05.337220  990141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:07:05.341663  990141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:07:05.345964  990141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:07:05.349559  990141 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:07:05.369027  990141 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:07:05.678202  990141 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:07:05.736460  990141 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:07:05.737804  990141 kubeadm.go:322] 
	I0830 22:07:05.737882  990141 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:07:05.737889  990141 kubeadm.go:322] 
	I0830 22:07:05.737981  990141 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:07:05.737987  990141 kubeadm.go:322] 
	I0830 22:07:05.738018  990141 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:07:05.738090  990141 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:07:05.738153  990141 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:07:05.738165  990141 kubeadm.go:322] 
	I0830 22:07:05.738240  990141 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:07:05.738246  990141 kubeadm.go:322] 
	I0830 22:07:05.738311  990141 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:07:05.738316  990141 kubeadm.go:322] 
	I0830 22:07:05.738383  990141 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:07:05.738482  990141 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:07:05.738571  990141 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:07:05.738579  990141 kubeadm.go:322] 
	I0830 22:07:05.738673  990141 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:07:05.738762  990141 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:07:05.738767  990141 kubeadm.go:322] 
	I0830 22:07:05.738867  990141 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3pliu7.2ck2z7k2h029781o \
	I0830 22:07:05.738994  990141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:07:05.739019  990141 kubeadm.go:322] 	--control-plane 
	I0830 22:07:05.739024  990141 kubeadm.go:322] 
	I0830 22:07:05.739129  990141 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:07:05.739133  990141 kubeadm.go:322] 
	I0830 22:07:05.739243  990141 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3pliu7.2ck2z7k2h029781o \
	I0830 22:07:05.739362  990141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:07:05.739651  990141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:07:05.739676  990141 cni.go:84] Creating CNI manager for ""
	I0830 22:07:05.739699  990141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:07:05.741786  990141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:07:05.743497  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:07:05.785087  990141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:07:05.811073  990141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:07:05.811162  990141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:07:05.811165  990141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=cert-expiration-693390 minikube.k8s.io/updated_at=2023_08_30T22_07_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:07:05.882282  990141 ops.go:34] apiserver oom_adj: -16
	I0830 22:07:06.178742  990141 kubeadm.go:1081] duration metric: took 367.657958ms to wait for elevateKubeSystemPrivileges.
	I0830 22:07:06.215562  990141 kubeadm.go:406] StartCluster complete in 13.432094638s
	I0830 22:07:06.215597  990141 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:07:06.215698  990141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:07:06.217096  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:07:06.217354  990141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:07:06.217669  990141 config.go:182] Loaded profile config "cert-expiration-693390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:07:06.217799  990141 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:07:06.217889  990141 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-693390"
	I0830 22:07:06.217908  990141 addons.go:231] Setting addon storage-provisioner=true in "cert-expiration-693390"
	I0830 22:07:06.217909  990141 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-693390"
	I0830 22:07:06.217923  990141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-693390"
	I0830 22:07:06.217969  990141 host.go:66] Checking if "cert-expiration-693390" exists ...
	I0830 22:07:06.218426  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.218455  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.218486  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.218507  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.234781  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0830 22:07:06.235519  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.236167  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.236181  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.237341  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41185
	I0830 22:07:06.237631  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.237783  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.237817  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetState
	I0830 22:07:06.238335  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.238354  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.238729  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.239340  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.239375  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.248773  990141 addons.go:231] Setting addon default-storageclass=true in "cert-expiration-693390"
	I0830 22:07:06.248807  990141 host.go:66] Checking if "cert-expiration-693390" exists ...
	I0830 22:07:06.249172  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.249207  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.262024  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33431
	I0830 22:07:06.262544  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.263067  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.263083  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.263515  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.263722  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetState
	I0830 22:07:06.266002  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:07:06.267895  990141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:07:06.267897  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I0830 22:07:06.268537  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.269398  990141 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:07:06.269409  990141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:07:06.269428  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:07:06.269982  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.269995  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.270425  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.270529  990141 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-693390" context rescaled to 1 replicas
	I0830 22:07:06.270563  990141 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.85 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:07:06.273579  990141 out.go:177] * Verifying Kubernetes components...
	I0830 22:07:06.271210  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.272878  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:07:06.273545  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:07:06.275060  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.275098  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:07:06.275120  990141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:07:06.275122  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:07:06.275268  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:07:06.275462  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:07:06.275682  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:07:06.291301  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0830 22:07:06.292398  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.292990  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.293006  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.293414  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.293627  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetState
	I0830 22:07:06.295365  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:07:06.296032  990141 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:07:06.296040  990141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:07:06.296059  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:07:06.298874  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:07:06.299274  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:07:06.299294  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:07:06.299453  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:07:06.299599  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:07:06.299733  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:07:06.299853  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:07:06.461045  990141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:07:06.462006  990141 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:07:06.462069  990141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:07:06.479901  990141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:07:06.498609  990141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:07:07.957879  990141 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.496801075s)
	I0830 22:07:07.957913  990141 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0830 22:07:07.957943  990141 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.495851407s)
	I0830 22:07:07.957969  990141 api_server.go:72] duration metric: took 1.687277939s to wait for apiserver process to appear ...
	I0830 22:07:07.957976  990141 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:07:07.958010  990141 api_server.go:253] Checking apiserver healthz at https://192.168.61.85:8443/healthz ...
	I0830 22:07:07.965169  990141 api_server.go:279] https://192.168.61.85:8443/healthz returned 200:
	ok
	I0830 22:07:07.966458  990141 api_server.go:141] control plane version: v1.28.1
	I0830 22:07:07.966475  990141 api_server.go:131] duration metric: took 8.493138ms to wait for apiserver health ...
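
The lines above follow a simple pattern: poll the apiserver's /healthz endpoint until it returns 200, then read the control-plane version. Below is a minimal Go sketch of such a wait loop, written for illustration only (it is not minikube's api_server.go); TLS verification is skipped because the apiserver presents a cluster-local certificate, and the URL is the one shown in the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls a /healthz URL until it answers 200 OK or the
    // deadline passes. InsecureSkipVerify is set because the apiserver's
    // certificate is signed by a cluster-local CA this client does not trust.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.85:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
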
	I0830 22:07:07.966484  990141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:07:07.974645  990141 system_pods.go:59] 4 kube-system pods found
	I0830 22:07:07.974686  990141 system_pods.go:61] "etcd-cert-expiration-693390" [99d540bd-1ad3-487a-9d5d-410301332a18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:07:07.974699  990141 system_pods.go:61] "kube-apiserver-cert-expiration-693390" [19267299-1891-43d1-b254-ee086437fdba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:07:07.974708  990141 system_pods.go:61] "kube-controller-manager-cert-expiration-693390" [d9e46c35-3658-48ad-b8e6-4301239203fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:07:07.974720  990141 system_pods.go:61] "kube-scheduler-cert-expiration-693390" [1a3e0c22-0607-4fc0-955f-37877ec44350] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:07:07.974727  990141 system_pods.go:74] duration metric: took 8.237179ms to wait for pod list to return data ...
	I0830 22:07:07.974738  990141 kubeadm.go:581] duration metric: took 1.704046607s to wait for : map[apiserver:true system_pods:true] ...
	I0830 22:07:07.974753  990141 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:07:07.978422  990141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:07:07.978442  990141 node_conditions.go:123] node cpu capacity is 2
	I0830 22:07:07.978453  990141 node_conditions.go:105] duration metric: took 3.696615ms to run NodePressure ...
	I0830 22:07:07.978466  990141 start.go:228] waiting for startup goroutines ...
	I0830 22:07:08.251744  990141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.771809023s)
	I0830 22:07:08.251808  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.251821  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.253968  990141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.755335385s)
	I0830 22:07:08.254016  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.254027  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.254430  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.254442  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.254451  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.254460  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.254599  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.254610  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.254610  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Closing plugin on server side
	I0830 22:07:08.254618  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.254627  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.255284  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.255285  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Closing plugin on server side
	I0830 22:07:08.255294  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.255307  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Closing plugin on server side
	I0830 22:07:08.255334  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.255342  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.255445  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.255454  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.255882  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.255913  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.257841  990141 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 22:07:08.259256  990141 addons.go:502] enable addons completed in 2.041462583s: enabled=[storage-provisioner default-storageclass]
	I0830 22:07:08.259293  990141 start.go:233] waiting for cluster config update ...
	I0830 22:07:08.259307  990141 start.go:242] writing updated cluster config ...
	I0830 22:07:08.259634  990141 ssh_runner.go:195] Run: rm -f paused
	I0830 22:07:08.337974  990141 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:07:08.340208  990141 out.go:177] * Done! kubectl is now configured to use "cert-expiration-693390" cluster and "default" namespace by default
	I0830 22:07:07.412243  990580 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.433311903s)
	I0830 22:07:07.412282  990580 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:07:07.412346  990580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:07:07.419936  990580 start.go:534] Will wait 60s for crictl version
	I0830 22:07:07.420003  990580 ssh_runner.go:195] Run: which crictl
	I0830 22:07:07.425713  990580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:07:07.681641  990580 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
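
"Will wait 60s for socket path /var/run/crio/crio.sock" above amounts to polling stat on the socket until it exists. A minimal sketch of that wait, assuming the path and timeout from the log (illustrative only, not minikube's start.go):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls os.Stat on path until the file appears or the
    // timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
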
	I0830 22:07:07.681755  990580 ssh_runner.go:195] Run: crio --version
	I0830 22:07:08.259976  990580 ssh_runner.go:195] Run: crio --version
	I0830 22:07:08.363331  990580 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:07:05.496237  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:05.496781  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:05.496811  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:05.496692  990833 retry.go:31] will retry after 2.403018753s: waiting for machine to come up
	I0830 22:07:07.901374  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:07.901988  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:07.902016  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:07.901900  990833 retry.go:31] will retry after 2.875611012s: waiting for machine to come up
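
The retry.go lines above ("will retry after 2.4s", "will retry after 2.9s") show the machine-creation loop backing off between attempts while it waits for the VM to obtain an IP address. A generic sketch of such a retry-with-growing-delay helper, purely illustrative and not minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff calls fn up to attempts times, sleeping a delay that
    // grows by half after each failure, and returns the last error.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            delay += delay / 2
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(5, 2*time.Second, func() error {
            tries++
            if tries < 3 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
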
	I0830 22:07:08.364997  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:07:08.368430  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:07:08.368857  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:07:08.368893  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:07:08.369113  990580 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0830 22:07:08.378998  990580 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:07:08.379077  990580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:07:08.444088  990580 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:07:08.444115  990580 crio.go:415] Images already preloaded, skipping extraction
	I0830 22:07:08.444179  990580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:07:08.497435  990580 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:07:08.497464  990580 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:07:08.497564  990580 ssh_runner.go:195] Run: crio config
	I0830 22:07:08.612207  990580 cni.go:84] Creating CNI manager for ""
	I0830 22:07:08.612239  990580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:07:08.612267  990580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:07:08.612295  990580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-820510 NodeName:pause-820510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:07:08.612513  990580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-820510"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:07:08.612616  990580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-820510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-820510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:07:08.612690  990580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:07:08.632244  990580 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:07:08.632339  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:07:08.652720  990580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0830 22:07:08.688698  990580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:07:08.726622  990580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0830 22:07:08.758923  990580 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0830 22:07:08.770902  990580 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510 for IP: 192.168.72.94
	I0830 22:07:08.770952  990580 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:07:08.771139  990580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:07:08.771204  990580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:07:08.771295  990580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/client.key
	I0830 22:07:08.771394  990580 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.key.90452619
	I0830 22:07:08.771460  990580 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.key
	I0830 22:07:08.771611  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:07:08.771647  990580 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:07:08.771662  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:07:08.771695  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:07:08.771730  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:07:08.771764  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:07:08.771837  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:07:08.772744  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:07:08.818726  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:07:08.862923  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:07:08.911406  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:07:08.999894  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:07:09.058431  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:07:09.119865  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:07:09.190391  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:07:09.241029  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:07:09.293842  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:07:09.338532  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:07:09.393923  990580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:07:09.420541  990580 ssh_runner.go:195] Run: openssl version
	I0830 22:07:09.432288  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:07:09.451569  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.461237  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.461322  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.472810  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:07:09.488771  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:07:09.507791  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.516037  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.516106  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.523501  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:07:09.545329  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:07:09.565996  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.575701  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.575768  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.588786  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
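
The three "test -L ... || ln -fs ..." commands above expose each CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs (b5213941.0, 3ec20f2e.0 and 51391683.0 in this run). A rough Go equivalent of that link-if-missing step, assuming the hash has already been computed with openssl x509 -hash -noout; this is a sketch, not minikube's certs.go:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // linkCertByHash creates /etc/ssl/certs/<hash>.0 -> certPath when the
    // symlink is not already present, mirroring "test -L ... || ln -fs ...".
    func linkCertByHash(certPath, hash string) error {
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // link (or a file by that name) already exists
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "b5213941"); err != nil {
            fmt.Println(err)
        }
    }
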
	I0830 22:07:09.614023  990580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:07:09.624374  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:07:09.636450  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:07:09.649864  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:07:09.661854  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:07:09.672722  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:07:09.686022  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
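
Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether the certificate will still be valid 24 hours from now. The same check expressed in Go, as a sketch (paths taken from the log; not minikube's actual code): parse the PEM block and compare NotAfter against now plus the window.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within
    // window, the question "-checkend 86400" answers for a 24h window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
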
	I0830 22:07:09.700215  990580 kubeadm.go:404] StartCluster: {Name:pause-820510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.1 ClusterName:pause-820510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:fa
lse pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:07:09.700375  990580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:07:09.700430  990580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:07:09.756078  990580 cri.go:89] found id: "1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3"
	I0830 22:07:09.756108  990580 cri.go:89] found id: "ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976"
	I0830 22:07:09.756115  990580 cri.go:89] found id: "aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	I0830 22:07:09.756121  990580 cri.go:89] found id: "b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495"
	I0830 22:07:09.756126  990580 cri.go:89] found id: "bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79"
	I0830 22:07:09.756133  990580 cri.go:89] found id: "9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639"
	I0830 22:07:09.756139  990580 cri.go:89] found id: ""
	I0830 22:07:09.756200  990580 ssh_runner.go:195] Run: sudo runc list -f json
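
The cri.go "found id:" lines above come from running crictl ps -a --quiet with a namespace label filter and treating every non-empty output line as a container ID. A minimal sketch of that step, assuming crictl is on PATH and can run without sudo (the log runs it through sudo over SSH); illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers shells out to crictl and returns one container
    // ID per non-empty output line.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }
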
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:05:44 UTC, ends at Wed 2023-08-30 22:07:30 UTC. --
	Aug 30 22:07:29 pause-820510 crio[2630]: time="2023-08-30 22:07:29.751389555Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jrqc4,Uid:5084572f-86f8-4338-82d1-f3df68aae5fd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1693433227923371818,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:06:31.449717254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&PodSandboxMetadata{Name:etcd-pause-820510,Uid:77b9c1525043120cb9292cc4b0ac27eb,Namespace:kube-system,Attempt:2,
},State:SANDBOX_READY,CreatedAt:1693433227830913406,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.94:2379,kubernetes.io/config.hash: 77b9c1525043120cb9292cc4b0ac27eb,kubernetes.io/config.seen: 2023-08-30T22:06:16.598296798Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&PodSandboxMetadata{Name:kube-proxy-zjl5m,Uid:61114403-040d-4f67-a7c0-91232c7b499e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1693433227819540829,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61114403-040d-4f67-a7c0-91232c7b499e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:06:29.692361324Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-820510,Uid:317907ee69b8984088b017f8ff46a7db,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1693433227785005983,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 317907ee69b8984088b017f8ff46a7db,kubernetes.io/config.seen: 2023-08-30T22:06:16.598292963Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711
334,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-820510,Uid:78721dadef96167f7ab96108b4edc786,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1693433227718852811,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.94:8443,kubernetes.io/config.hash: 78721dadef96167f7ab96108b4edc786,kubernetes.io/config.seen: 2023-08-30T22:06:16.598297981Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-820510,Uid:3d65dfa120c1febbd8341def54b8b82d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1693433227547879022,Labels:map[string]strin
g{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3d65dfa120c1febbd8341def54b8b82d,kubernetes.io/config.seen: 2023-08-30T22:06:16.598299187Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jrqc4,Uid:5084572f-86f8-4338-82d1-f3df68aae5fd,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1693433214410845466,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2023-08-30T22:06:31.449717254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-820510,Uid:78721dadef96167f7ab96108b4edc786,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1693433214405493239,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.94:8443,kubernetes.io/config.hash: 78721dadef96167f7ab96108b4edc786,kubernetes.io/config.seen: 2023-08-30T22:06:16.598297981Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&PodSandboxMetadata{Name:etcd-p
ause-820510,Uid:77b9c1525043120cb9292cc4b0ac27eb,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1693433214346780190,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.94:2379,kubernetes.io/config.hash: 77b9c1525043120cb9292cc4b0ac27eb,kubernetes.io/config.seen: 2023-08-30T22:06:16.598296798Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-820510,Uid:3d65dfa120c1febbd8341def54b8b82d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1693433214333304384,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3d65dfa120c1febbd8341def54b8b82d,kubernetes.io/config.seen: 2023-08-30T22:06:16.598299187Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-820510,Uid:317907ee69b8984088b017f8ff46a7db,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1693433214280834513,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 317907ee69b8984088b017f8ff46a7db,kubernetes.io/config.seen: 2023-08
-30T22:06:16.598292963Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&PodSandboxMetadata{Name:kube-proxy-zjl5m,Uid:61114403-040d-4f67-a7c0-91232c7b499e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1693433214242977447,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:06:29.692361324Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=96059154-4636-4840-b332-161c322e08b4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 30 22:07:29 pause-820510 crio[2630]: time="2023-08-30 22:07:29.752284341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=719c65a1-7eea-47c9-a428-1336b920e278 name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:07:29 pause-820510 crio[2630]: time="2023-08-30 22:07:29.752337156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=719c65a1-7eea-47c9-a428-1336b920e278 name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:07:29 pause-820510 crio[2630]: time="2023-08-30 22:07:29.752571505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=719c65a1-7eea-47c9-a428-1336b920e278 name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.177996789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f1e8cf02-121f-4132-b203-e787fc594c03 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.178062346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f1e8cf02-121f-4132-b203-e787fc594c03 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.178369398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f1e8cf02-121f-4132-b203-e787fc594c03 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.224413819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd103cd1-9e9e-402d-95cc-2b76f681c570 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.224522377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd103cd1-9e9e-402d-95cc-2b76f681c570 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.224935400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd103cd1-9e9e-402d-95cc-2b76f681c570 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.278235298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=863093aa-d8ef-4241-abfe-b3c07cb9f537 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.278311401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=863093aa-d8ef-4241-abfe-b3c07cb9f537 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.278545702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=863093aa-d8ef-4241-abfe-b3c07cb9f537 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.316868717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8f41b45e-1b48-411e-8f38-2be2fba3d647 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.316997679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8f41b45e-1b48-411e-8f38-2be2fba3d647 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.317299473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f41b45e-1b48-411e-8f38-2be2fba3d647 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.359514731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a91dd7a7-9fe6-4b09-9f81-ca2363cc177b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.359599931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a91dd7a7-9fe6-4b09-9f81-ca2363cc177b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.359926936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a91dd7a7-9fe6-4b09-9f81-ca2363cc177b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.399488313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=81244bef-8fab-469b-89f7-ebd23f36b145 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.399609718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=81244bef-8fab-469b-89f7-ebd23f36b145 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.399895085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=81244bef-8fab-469b-89f7-ebd23f36b145 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.442236965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9dc57699-4a59-4a04-9276-d49a17b46faf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.442329451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9dc57699-4a59-4a04-9276-d49a17b46faf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:30 pause-820510 crio[2630]: time="2023-08-30 22:07:30.442595151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9dc57699-4a59-4a04-9276-d49a17b46faf name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	e41116aefc915       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   4 seconds ago       Running             coredns                   2                   e204ca9921388
	28bf8feea07b5       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   17 seconds ago      Running             kube-proxy                2                   91a124e79aae7
	7ec8f19135e9f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   20 seconds ago      Running             etcd                      2                   1699ee9242fbf
	cb65566f89a00       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   20 seconds ago      Running             kube-scheduler            2                   ed4dae876a04e
	c94ec52e9f036       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   21 seconds ago      Running             kube-controller-manager   2                   c256e1c726594
	7ed7b37ccdf00       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   21 seconds ago      Running             kube-apiserver            2                   f8635a571bc37
	1e39fba900da8       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   34 seconds ago      Exited              kube-proxy                1                   f53410c5d6ef5
	ed97a3cf66164       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   34 seconds ago      Exited              kube-scheduler            1                   42406a0245bd6
	aa0b2dfde6334       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   34 seconds ago      Exited              coredns                   1                   b0522f8d8e0e7
	b18e5122a3fc5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   34 seconds ago      Exited              etcd                      1                   3e4ceb4ffd2a7
	bc3c56300a5df       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   34 seconds ago      Exited              kube-controller-manager   1                   a9b199e4e5f23
	9f63a7000a547       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   34 seconds ago      Exited              kube-apiserver            1                   79cf121efdbd2
	
	* 
	* ==> coredns [aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50013 - 47724 "HINFO IN 6454075429887002369.3773186301198531786. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010363644s
	
	* 
	* ==> coredns [e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37414 - 31176 "HINFO IN 3074781893109531955.2640077726230566319. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010880168s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-820510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-820510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=pause-820510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_06_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:06:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-820510
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:07:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:06:37 +0000   Wed, 30 Aug 2023 22:06:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:06:37 +0000   Wed, 30 Aug 2023 22:06:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:06:37 +0000   Wed, 30 Aug 2023 22:06:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:06:37 +0000   Wed, 30 Aug 2023 22:06:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.94
	  Hostname:    pause-820510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 444dc489bdc444a0b27cb9d4b4ae41c8
	  System UUID:                444dc489-bdc4-44a0-b27c-b9d4b4ae41c8
	  Boot ID:                    b43e6b75-2fc1-43af-9087-5889db85ed24
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jrqc4                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 etcd-pause-820510                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         73s
	  kube-system                 kube-apiserver-pause-820510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-pause-820510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-zjl5m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-pause-820510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientMemory  83s (x9 over 83s)  kubelet          Node pause-820510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x7 over 83s)  kubelet          Node pause-820510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node pause-820510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s                kubelet          Node pause-820510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet          Node pause-820510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s                kubelet          Node pause-820510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                73s                kubelet          Node pause-820510 status is now: NodeReady
	  Normal  RegisteredNode           61s                node-controller  Node pause-820510 event: Registered Node pause-820510 in Controller
	  Normal  RegisteredNode           2s                 node-controller  Node pause-820510 event: Registered Node pause-820510 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug30 22:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071197] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527404] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.743481] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.178441] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.149921] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.992732] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.114317] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.148335] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.112392] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.195727] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[Aug30 22:06] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +9.791437] systemd-fstab-generator[1258]: Ignoring "noauto" for root device
	[ +38.018069] kauditd_printk_skb: 23 callbacks suppressed
	[  +2.132716] systemd-fstab-generator[2390]: Ignoring "noauto" for root device
	[  +0.242500] systemd-fstab-generator[2419]: Ignoring "noauto" for root device
	[  +0.272413] systemd-fstab-generator[2436]: Ignoring "noauto" for root device
	[  +0.262948] systemd-fstab-generator[2447]: Ignoring "noauto" for root device
	[  +0.586845] systemd-fstab-generator[2487]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b] <==
	* {"level":"warn","ts":"2023-08-30T22:07:28.647564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.236104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-08-30T22:07:28.648315Z","caller":"traceutil/trace.go:171","msg":"trace[895176764] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:505; }","duration":"306.987022ms","start":"2023-08-30T22:07:28.341316Z","end":"2023-08-30T22:07:28.648303Z","steps":["trace[895176764] 'agreement among raft nodes before linearized reading'  (duration: 306.137378ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:28.648376Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.3413Z","time spent":"307.065018ms","remote":"127.0.0.1:35158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":237,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"info","ts":"2023-08-30T22:07:28.647373Z","caller":"traceutil/trace.go:171","msg":"trace[1793542339] linearizableReadLoop","detail":"{readStateIndex:526; appliedIndex:525; }","duration":"305.879169ms","start":"2023-08-30T22:07:28.341341Z","end":"2023-08-30T22:07:28.64722Z","steps":["trace[1793542339] 'read index received'  (duration: 203.207738ms)","trace[1793542339] 'applied index is now lower than readState.Index'  (duration: 102.670215ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T22:07:29.295649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.170699ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11493619767956879919 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" mod_revision:403 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-08-30T22:07:29.295848Z","caller":"traceutil/trace.go:171","msg":"trace[369982309] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:526; }","duration":"472.645706ms","start":"2023-08-30T22:07:28.823191Z","end":"2023-08-30T22:07:29.295837Z","steps":["trace[369982309] 'read index received'  (duration: 120.242499ms)","trace[369982309] 'applied index is now lower than readState.Index'  (duration: 352.402589ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-30T22:07:29.296178Z","caller":"traceutil/trace.go:171","msg":"trace[2144013270] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"629.944383ms","start":"2023-08-30T22:07:28.666222Z","end":"2023-08-30T22:07:29.296167Z","steps":["trace[2144013270] 'process raft request'  (duration: 277.203038ms)","trace[2144013270] 'compare'  (duration: 352.071287ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T22:07:29.296239Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.666178Z","time spent":"630.025523ms","remote":"127.0.0.1:35174","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1298,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" mod_revision:403 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" > >"}
	{"level":"info","ts":"2023-08-30T22:07:29.296378Z","caller":"traceutil/trace.go:171","msg":"trace[1842352685] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"628.633729ms","start":"2023-08-30T22:07:28.667739Z","end":"2023-08-30T22:07:29.296373Z","steps":["trace[1842352685] 'process raft request'  (duration: 628.019602ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.296408Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.667726Z","time spent":"628.665256ms","remote":"127.0.0.1:35150","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:402 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"warn","ts":"2023-08-30T22:07:29.296489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"473.396558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2023-08-30T22:07:29.296506Z","caller":"traceutil/trace.go:171","msg":"trace[1616670351] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:507; }","duration":"473.413715ms","start":"2023-08-30T22:07:28.823087Z","end":"2023-08-30T22:07:29.296501Z","steps":["trace[1616670351] 'agreement among raft nodes before linearized reading'  (duration: 473.377161ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.296519Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.823068Z","time spent":"473.447395ms","remote":"127.0.0.1:35152","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5449,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2023-08-30T22:07:29.296673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.113655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2895"}
	{"level":"info","ts":"2023-08-30T22:07:29.2967Z","caller":"traceutil/trace.go:171","msg":"trace[601172364] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:507; }","duration":"434.143377ms","start":"2023-08-30T22:07:28.862549Z","end":"2023-08-30T22:07:29.296692Z","steps":["trace[601172364] 'agreement among raft nodes before linearized reading'  (duration: 434.086801ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.296722Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.862533Z","time spent":"434.182797ms","remote":"127.0.0.1:35218","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":2918,"request content":"key:\"/registry/daemonsets/kube-system/kube-proxy\" "}
	{"level":"warn","ts":"2023-08-30T22:07:29.298501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"435.506969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2023-08-30T22:07:29.298564Z","caller":"traceutil/trace.go:171","msg":"trace[205033553] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:507; }","duration":"435.574269ms","start":"2023-08-30T22:07:28.862982Z","end":"2023-08-30T22:07:29.298557Z","steps":["trace[205033553] 'agreement among raft nodes before linearized reading'  (duration: 435.442944ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.298618Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.862974Z","time spent":"435.635549ms","remote":"127.0.0.1:35214","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4156,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-08-30T22:07:29.299217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"435.725588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/kube-dns\" ","response":"range_response_count:1 size:1211"}
	{"level":"info","ts":"2023-08-30T22:07:29.299318Z","caller":"traceutil/trace.go:171","msg":"trace[1440488534] range","detail":"{range_begin:/registry/services/specs/kube-system/kube-dns; range_end:; response_count:1; response_revision:507; }","duration":"435.83196ms","start":"2023-08-30T22:07:28.863479Z","end":"2023-08-30T22:07:29.299311Z","steps":["trace[1440488534] 'agreement among raft nodes before linearized reading'  (duration: 433.431195ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.299341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.86347Z","time spent":"435.864387ms","remote":"127.0.0.1:35156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":1234,"request content":"key:\"/registry/services/specs/kube-system/kube-dns\" "}
	{"level":"warn","ts":"2023-08-30T22:07:29.299903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"437.2361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-820510\" ","response":"range_response_count:1 size:676"}
	{"level":"info","ts":"2023-08-30T22:07:29.300007Z","caller":"traceutil/trace.go:171","msg":"trace[1878773192] range","detail":"{range_begin:/registry/csinodes/pause-820510; range_end:; response_count:1; response_revision:507; }","duration":"437.343148ms","start":"2023-08-30T22:07:28.862657Z","end":"2023-08-30T22:07:29.3Z","steps":["trace[1878773192] 'agreement among raft nodes before linearized reading'  (duration: 436.069091ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.300049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.862653Z","time spent":"437.388159ms","remote":"127.0.0.1:35200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":699,"request content":"key:\"/registry/csinodes/pause-820510\" "}
	
	* 
	* ==> etcd [b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495] <==
	* 
	* 
	* ==> kernel <==
	*  22:07:30 up 1 min,  0 users,  load average: 1.74, 0.65, 0.24
	Linux pause-820510 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c] <==
	* I0830 22:07:15.148955       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0830 22:07:15.149223       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0830 22:07:15.149958       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0830 22:07:15.151808       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0830 22:07:15.159078       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0830 22:07:15.159226       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0830 22:07:15.956217       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0830 22:07:28.134806       1 trace.go:236] Trace[346750859]: "Get" accept:application/json, */*,audit-id:49e669cb-3689-4a50-b8fa-bb76ad1aed63,client:192.168.72.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-820510,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (30-Aug-2023 22:07:27.563) (total time: 571ms):
	Trace[346750859]: ---"About to write a response" 570ms (22:07:28.134)
	Trace[346750859]: [571.306514ms] [571.306514ms] END
	I0830 22:07:28.138237       1 trace.go:236] Trace[470804109]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:193167bc-441c-4786-b68f-f202b808802b,client:192.168.72.94,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-820510/status,user-agent:kubelet/v1.28.1 (linux/amd64) kubernetes/8dc49c4,verb:PATCH (30-Aug-2023 22:07:27.536) (total time: 601ms):
	Trace[470804109]: ["GuaranteedUpdate etcd3" audit-id:193167bc-441c-4786-b68f-f202b808802b,key:/pods/kube-system/etcd-pause-820510,type:*core.Pod,resource:pods 601ms (22:07:27.536)
	Trace[470804109]:  ---"Txn call completed" 594ms (22:07:28.134)]
	Trace[470804109]: ---"Object stored in database" 595ms (22:07:28.134)
	Trace[470804109]: [601.364626ms] [601.364626ms] END
	I0830 22:07:28.664423       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0830 22:07:28.666517       1 controller.go:624] quota admission added evaluator for: endpoints
	I0830 22:07:29.301568       1 trace.go:236] Trace[1334724778]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:5708b4bb-2cfb-489b-9b5f-ebf567f457c3,client:192.168.72.94,protocol:HTTP/2.0,resource:endpointslices,scope:resource,url:/apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices/kube-dns-9zk9x,user-agent:kube-controller-manager/v1.28.1 (linux/amd64) kubernetes/8dc49c4/system:serviceaccount:kube-system:endpointslice-controller,verb:PUT (30-Aug-2023 22:07:28.662) (total time: 639ms):
	Trace[1334724778]: ["GuaranteedUpdate etcd3" audit-id:5708b4bb-2cfb-489b-9b5f-ebf567f457c3,key:/endpointslices/kube-system/kube-dns-9zk9x,type:*discovery.EndpointSlice,resource:endpointslices.discovery.k8s.io 639ms (22:07:28.662)
	Trace[1334724778]:  ---"Txn call completed" 636ms (22:07:29.301)]
	Trace[1334724778]: [639.461143ms] [639.461143ms] END
	I0830 22:07:29.301802       1 trace.go:236] Trace[1539064073]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:47d720a2-0ac0-4c1a-938f-71fe601ecfbf,client:192.168.72.94,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/kube-dns,user-agent:kube-controller-manager/v1.28.1 (linux/amd64) kubernetes/8dc49c4/system:serviceaccount:kube-system:endpoint-controller,verb:PUT (30-Aug-2023 22:07:28.664) (total time: 636ms):
	Trace[1539064073]: ["GuaranteedUpdate etcd3" audit-id:47d720a2-0ac0-4c1a-938f-71fe601ecfbf,key:/services/endpoints/kube-system/kube-dns,type:*core.Endpoints,resource:endpoints 636ms (22:07:28.665)
	Trace[1539064073]:  ---"Txn call completed" 635ms (22:07:29.301)]
	Trace[1539064073]: [636.921542ms] [636.921542ms] END
	
	* 
	* ==> kube-apiserver [9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639] <==
	* 
	* 
	* ==> kube-controller-manager [bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79] <==
	* 
	* 
	* ==> kube-controller-manager [c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c] <==
	* I0830 22:07:28.315349       1 shared_informer.go:318] Caches are synced for service account
	I0830 22:07:28.319703       1 shared_informer.go:318] Caches are synced for daemon sets
	I0830 22:07:28.324955       1 shared_informer.go:318] Caches are synced for taint
	I0830 22:07:28.325087       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0830 22:07:28.325337       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-820510"
	I0830 22:07:28.325441       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0830 22:07:28.325536       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0830 22:07:28.325847       1 taint_manager.go:211] "Sending events to api server"
	I0830 22:07:28.325992       1 event.go:307] "Event occurred" object="pause-820510" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-820510 event: Registered Node pause-820510 in Controller"
	I0830 22:07:28.337809       1 shared_informer.go:318] Caches are synced for ephemeral
	I0830 22:07:28.337889       1 shared_informer.go:318] Caches are synced for stateful set
	I0830 22:07:28.337900       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0830 22:07:28.337907       1 shared_informer.go:318] Caches are synced for TTL
	I0830 22:07:28.340252       1 shared_informer.go:318] Caches are synced for GC
	I0830 22:07:28.345708       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0830 22:07:28.354363       1 shared_informer.go:318] Caches are synced for HPA
	I0830 22:07:28.356795       1 shared_informer.go:318] Caches are synced for attach detach
	I0830 22:07:28.358468       1 shared_informer.go:318] Caches are synced for disruption
	I0830 22:07:28.393222       1 shared_informer.go:318] Caches are synced for crt configmap
	I0830 22:07:28.397553       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 22:07:28.403088       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0830 22:07:28.483641       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 22:07:28.854450       1 shared_informer.go:318] Caches are synced for garbage collector
	I0830 22:07:28.854636       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0830 22:07:28.858912       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3] <==
	* 
	* 
	* ==> kube-proxy [28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0] <==
	* I0830 22:07:12.977931       1 server_others.go:69] "Using iptables proxy"
	I0830 22:07:15.111645       1 node.go:141] Successfully retrieved node IP: 192.168.72.94
	I0830 22:07:15.353190       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 22:07:15.353348       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 22:07:15.358866       1 server_others.go:152] "Using iptables Proxier"
	I0830 22:07:15.359082       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 22:07:15.359529       1 server.go:846] "Version info" version="v1.28.1"
	I0830 22:07:15.359566       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:07:15.361253       1 config.go:315] "Starting node config controller"
	I0830 22:07:15.361290       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 22:07:15.362338       1 config.go:188] "Starting service config controller"
	I0830 22:07:15.362596       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 22:07:15.362712       1 config.go:97] "Starting endpoint slice config controller"
	I0830 22:07:15.362719       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 22:07:15.461741       1 shared_informer.go:318] Caches are synced for node config
	I0830 22:07:15.464646       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 22:07:15.464660       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b] <==
	* I0830 22:07:12.212532       1 serving.go:348] Generated self-signed cert in-memory
	W0830 22:07:15.055207       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0830 22:07:15.055444       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 22:07:15.055458       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0830 22:07:15.055471       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 22:07:15.119783       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0830 22:07:15.119834       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:07:15.125832       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0830 22:07:15.126025       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0830 22:07:15.126048       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 22:07:15.126077       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0830 22:07:15.226835       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976] <==
	* 
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:05:44 UTC, ends at Wed 2023-08-30 22:07:31 UTC. --
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.589559    1265 status_manager.go:853] "Failed to get status for pod" podUID="78721dadef96167f7ab96108b4edc786" pod="kube-system/kube-apiserver-pause-820510" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-820510\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.590297    1265 status_manager.go:853] "Failed to get status for pod" podUID="61114403-040d-4f67-a7c0-91232c7b499e" pod="kube-system/kube-proxy-zjl5m" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjl5m\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.669889    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?resourceVersion=0&timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.670394    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.670760    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.671071    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.671611    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.671667    1265 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.698381    1265 scope.go:117] "RemoveContainer" containerID="a06b4dab9d461a996e90c7378e63b3034a632f7cac47bc307602ca476ac85ddf"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.916559    1265 scope.go:117] "RemoveContainer" containerID="7acf9b92ae62ee58a768f304cc7ca0e1ac940575001c7b631c1281ac5e87fe2b"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.974653    1265 scope.go:117] "RemoveContainer" containerID="08aeb861e5e608ed884fc3aeac04b271ccb2f019e1a43c288186f1feb79a118c"
	Aug 30 22:07:08 pause-820510 kubelet[1265]: I0830 22:07:08.040189    1265 scope.go:117] "RemoveContainer" containerID="a370e2b1dd5d2db3bc0c30c527d2ef75988ef1e82017e4afdb1aa2196b9c28a8"
	Aug 30 22:07:08 pause-820510 kubelet[1265]: I0830 22:07:08.133361    1265 scope.go:117] "RemoveContainer" containerID="fd28398dff7f2fd66df1ce09fe5ee6d665425eaba45e6e4865b7737b9bc3cbf8"
	Aug 30 22:07:11 pause-820510 kubelet[1265]: E0830 22:07:11.264814    1265 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-jrqc4_kube-system(5084572f-86f8-4338-82d1-f3df68aae5fd)\"" pod="kube-system/coredns-5dd5756b68-jrqc4" podUID="5084572f-86f8-4338-82d1-f3df68aae5fd"
	Aug 30 22:07:11 pause-820510 kubelet[1265]: I0830 22:07:11.634569    1265 scope.go:117] "RemoveContainer" containerID="aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	Aug 30 22:07:11 pause-820510 kubelet[1265]: E0830 22:07:11.636086    1265 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-jrqc4_kube-system(5084572f-86f8-4338-82d1-f3df68aae5fd)\"" pod="kube-system/coredns-5dd5756b68-jrqc4" podUID="5084572f-86f8-4338-82d1-f3df68aae5fd"
	Aug 30 22:07:12 pause-820510 kubelet[1265]: I0830 22:07:12.647609    1265 scope.go:117] "RemoveContainer" containerID="aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	Aug 30 22:07:12 pause-820510 kubelet[1265]: E0830 22:07:12.647970    1265 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-jrqc4_kube-system(5084572f-86f8-4338-82d1-f3df68aae5fd)\"" pod="kube-system/coredns-5dd5756b68-jrqc4" podUID="5084572f-86f8-4338-82d1-f3df68aae5fd"
	Aug 30 22:07:13 pause-820510 kubelet[1265]: I0830 22:07:13.656916    1265 scope.go:117] "RemoveContainer" containerID="aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	Aug 30 22:07:13 pause-820510 kubelet[1265]: E0830 22:07:13.657237    1265 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-jrqc4_kube-system(5084572f-86f8-4338-82d1-f3df68aae5fd)\"" pod="kube-system/coredns-5dd5756b68-jrqc4" podUID="5084572f-86f8-4338-82d1-f3df68aae5fd"
	Aug 30 22:07:16 pause-820510 kubelet[1265]: E0830 22:07:16.847972    1265 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:07:16 pause-820510 kubelet[1265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:07:16 pause-820510 kubelet[1265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:07:16 pause-820510 kubelet[1265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:07:25 pause-820510 kubelet[1265]: I0830 22:07:25.716398    1265 scope.go:117] "RemoveContainer" containerID="aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:07:30.016788  991216 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17114-955377/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
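(Side note on the "bufio.Scanner: token too long" error in the stderr block above: this is a generic limitation of Go's bufio.Scanner, whose default maximum token (line) size is bufio.MaxScanTokenSize, 64 KiB, so a single very long line in lastStart.txt, such as a serialized cluster config, makes scanning fail with bufio.ErrTooLong. The sketch below is illustrative only, not minikube's code, and the file path is hypothetical; it shows how enlarging the scanner buffer avoids the error.)

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path, for illustration only
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default limit is bufio.MaxScanTokenSize (64 KiB); allow lines up to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // each full line, however long, is now returned intact
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call above, this would report bufio.ErrTooLong.
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}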
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-820510 -n pause-820510
helpers_test.go:261: (dbg) Run:  kubectl --context pause-820510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-820510 -n pause-820510
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-820510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-820510 logs -n 25: (1.383543307s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo docker                         | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo cat                            | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo                                | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo find                           | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-051361 sudo crio                           | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-051361                                     | cilium-051361             | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	| start   | -p cert-expiration-693390                            | cert-expiration-693390    | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:07 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-820510                                      | pause-820510              | jenkins | v1.31.2 | 30 Aug 23 22:06 UTC | 30 Aug 23 22:07 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-134135                          | force-systemd-env-134135  | jenkins | v1.31.2 | 30 Aug 23 22:06 UTC | 30 Aug 23 22:06 UTC |
	| start   | -p force-systemd-flag-882278                         | force-systemd-flag-882278 | jenkins | v1.31.2 | 30 Aug 23 22:06 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:06:44
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:06:44.280266  990773 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:06:44.280421  990773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:06:44.280431  990773 out.go:309] Setting ErrFile to fd 2...
	I0830 22:06:44.280439  990773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:06:44.280756  990773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:06:44.281463  990773 out.go:303] Setting JSON to false
	I0830 22:06:44.282779  990773 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13751,"bootTime":1693419453,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:06:44.282866  990773 start.go:138] virtualization: kvm guest
	I0830 22:06:44.286187  990773 out.go:177] * [force-systemd-flag-882278] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:06:44.288210  990773 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:06:44.289760  990773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:06:44.288305  990773 notify.go:220] Checking for updates...
	I0830 22:06:44.292277  990773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:06:44.293740  990773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:06:44.295073  990773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:06:44.296424  990773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:06:44.298290  990773 config.go:182] Loaded profile config "cert-expiration-693390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:44.298505  990773 config.go:182] Loaded profile config "pause-820510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:44.298590  990773 config.go:182] Loaded profile config "stopped-upgrade-184733": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0830 22:06:44.298728  990773 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:06:44.336203  990773 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 22:06:44.337754  990773 start.go:298] selected driver: kvm2
	I0830 22:06:44.337772  990773 start.go:902] validating driver "kvm2" against <nil>
	I0830 22:06:44.337786  990773 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:06:44.338633  990773 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:06:44.338708  990773 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:06:44.353848  990773 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:06:44.353888  990773 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 22:06:44.354081  990773 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0830 22:06:44.354110  990773 cni.go:84] Creating CNI manager for ""
	I0830 22:06:44.354119  990773 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:06:44.354127  990773 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0830 22:06:44.354133  990773 start_flags.go:319] config:
	{Name:force-systemd-flag-882278 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-882278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:06:44.354267  990773 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:06:44.356107  990773 out.go:177] * Starting control plane node force-systemd-flag-882278 in cluster force-systemd-flag-882278
	I0830 22:06:43.308654  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:43.309028  990141 main.go:141] libmachine: (cert-expiration-693390) Found IP for machine: 192.168.61.85
	I0830 22:06:43.309042  990141 main.go:141] libmachine: (cert-expiration-693390) Reserving static IP address...
	I0830 22:06:43.309057  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has current primary IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:43.309345  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | unable to find host DHCP lease matching {name: "cert-expiration-693390", mac: "52:54:00:f5:14:e4", ip: "192.168.61.85"} in network mk-cert-expiration-693390
	I0830 22:06:44.034057  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Getting to WaitForSSH function...
	I0830 22:06:44.034083  990141 main.go:141] libmachine: (cert-expiration-693390) Reserved static IP address: 192.168.61.85
	I0830 22:06:44.034098  990141 main.go:141] libmachine: (cert-expiration-693390) Waiting for SSH to be available...
	I0830 22:06:44.036624  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.036998  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.037021  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.037182  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Using SSH client type: external
	I0830 22:06:44.037203  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa (-rw-------)
	I0830 22:06:44.037246  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:06:44.037266  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | About to run SSH command:
	I0830 22:06:44.037277  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | exit 0
	I0830 22:06:44.131904  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | SSH cmd err, output: <nil>: 
	I0830 22:06:44.132164  990141 main.go:141] libmachine: (cert-expiration-693390) KVM machine creation complete!
	I0830 22:06:44.132483  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetConfigRaw
	I0830 22:06:44.133066  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:44.133276  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:44.133420  990141 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 22:06:44.133433  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetState
	I0830 22:06:44.135015  990141 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 22:06:44.135026  990141 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 22:06:44.135034  990141 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 22:06:44.135043  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.137601  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.137933  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.137962  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.138121  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.138325  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.138487  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.138613  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.138796  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.139219  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.139225  990141 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 22:06:44.263077  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:06:44.263102  990141 main.go:141] libmachine: Detecting the provisioner...
	I0830 22:06:44.263112  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.266077  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.266386  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.266411  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.266564  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.266729  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.266878  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.266986  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.267177  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.267553  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.267563  990141 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 22:06:44.388718  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 22:06:44.388843  990141 main.go:141] libmachine: found compatible host: buildroot
	I0830 22:06:44.388852  990141 main.go:141] libmachine: Provisioning with buildroot...
	I0830 22:06:44.388864  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetMachineName
	I0830 22:06:44.389142  990141 buildroot.go:166] provisioning hostname "cert-expiration-693390"
	I0830 22:06:44.389161  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetMachineName
	I0830 22:06:44.389357  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.392315  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.392712  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.392734  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.392870  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.393083  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.393325  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.393492  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.393610  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.393997  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.394005  990141 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-693390 && echo "cert-expiration-693390" | sudo tee /etc/hostname
	I0830 22:06:44.520240  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-693390
	
	I0830 22:06:44.520271  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.523000  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.523379  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.523405  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.523571  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.523816  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.524033  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.524175  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.524311  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.524920  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.524941  990141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-693390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-693390/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-693390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:06:44.649707  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:06:44.649729  990141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:06:44.649756  990141 buildroot.go:174] setting up certificates
	I0830 22:06:44.649766  990141 provision.go:83] configureAuth start
	I0830 22:06:44.649775  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetMachineName
	I0830 22:06:44.650151  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetIP
	I0830 22:06:44.653121  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.653526  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.653562  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.653662  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.656009  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.656336  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.656354  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.656479  990141 provision.go:138] copyHostCerts
	I0830 22:06:44.656551  990141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:06:44.656567  990141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:06:44.656628  990141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:06:44.656733  990141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:06:44.656742  990141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:06:44.656764  990141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:06:44.656804  990141 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:06:44.656806  990141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:06:44.656822  990141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:06:44.656856  990141 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-693390 san=[192.168.61.85 192.168.61.85 localhost 127.0.0.1 minikube cert-expiration-693390]
	I0830 22:06:44.822525  990141 provision.go:172] copyRemoteCerts
	I0830 22:06:44.822574  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:06:44.822599  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.825495  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.825790  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.825813  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.825989  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.826213  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.826393  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.826515  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:06:45.585054  990580 start.go:369] acquired machines lock for "pause-820510" in 9.891182s
	I0830 22:06:45.585111  990580 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:06:45.585119  990580 fix.go:54] fixHost starting: 
	I0830 22:06:45.585512  990580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:06:45.585566  990580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:06:45.602470  990580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40119
	I0830 22:06:45.602909  990580 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:06:45.603471  990580 main.go:141] libmachine: Using API Version  1
	I0830 22:06:45.603500  990580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:06:45.603922  990580 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:06:45.604167  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:45.604337  990580 main.go:141] libmachine: (pause-820510) Calling .GetState
	I0830 22:06:45.606011  990580 fix.go:102] recreateIfNeeded on pause-820510: state=Running err=<nil>
	W0830 22:06:45.606028  990580 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:06:45.608283  990580 out.go:177] * Updating the running kvm2 "pause-820510" VM ...
	I0830 22:06:44.913917  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:06:44.938697  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:06:44.961766  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:06:44.983227  990141 provision.go:86] duration metric: configureAuth took 333.446849ms
	I0830 22:06:44.983248  990141 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:06:44.983444  990141 config.go:182] Loaded profile config "cert-expiration-693390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:44.983520  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:44.986306  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.986622  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:44.986659  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:44.986843  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:44.987009  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.987177  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:44.987330  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:44.987476  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:44.987883  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:44.987893  990141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:06:45.330110  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:06:45.330130  990141 main.go:141] libmachine: Checking connection to Docker...
	I0830 22:06:45.330141  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetURL
	I0830 22:06:45.331518  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Using libvirt version 6000000
	I0830 22:06:45.333915  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.334291  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.334323  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.334485  990141 main.go:141] libmachine: Docker is up and running!
	I0830 22:06:45.334492  990141 main.go:141] libmachine: Reticulating splines...
	I0830 22:06:45.334497  990141 client.go:171] LocalClient.Create took 25.141183597s
	I0830 22:06:45.334521  990141 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-693390" took 25.141238644s
	I0830 22:06:45.334530  990141 start.go:300] post-start starting for "cert-expiration-693390" (driver="kvm2")
	I0830 22:06:45.334541  990141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:06:45.334560  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.334842  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:06:45.334862  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:45.336922  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.337260  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.337277  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.337430  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:45.337609  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.337782  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:45.337929  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:06:45.424687  990141 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:06:45.429031  990141 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:06:45.429049  990141 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:06:45.429105  990141 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:06:45.429191  990141 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:06:45.429298  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:06:45.437089  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:06:45.461099  990141 start.go:303] post-start completed in 126.555206ms
	I0830 22:06:45.461141  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetConfigRaw
	I0830 22:06:45.461804  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetIP
	I0830 22:06:45.464488  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.464872  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.464899  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.465110  990141 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/config.json ...
	I0830 22:06:45.465284  990141 start.go:128] duration metric: createHost completed in 25.29517827s
	I0830 22:06:45.465300  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:45.467514  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.467904  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.467920  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.468090  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:45.468286  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.468465  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.468590  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:45.468778  990141 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:45.469159  990141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.85 22 <nil> <nil>}
	I0830 22:06:45.469164  990141 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:06:45.584905  990141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433205.559243147
	
	I0830 22:06:45.584917  990141 fix.go:206] guest clock: 1693433205.559243147
	I0830 22:06:45.584923  990141 fix.go:219] Guest: 2023-08-30 22:06:45.559243147 +0000 UTC Remote: 2023-08-30 22:06:45.46528951 +0000 UTC m=+50.602366631 (delta=93.953637ms)
	I0830 22:06:45.584941  990141 fix.go:190] guest clock delta is within tolerance: 93.953637ms
	I0830 22:06:45.584945  990141 start.go:83] releasing machines lock for "cert-expiration-693390", held for 25.415017017s
	I0830 22:06:45.584967  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.585267  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetIP
	I0830 22:06:45.589991  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.590415  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.590460  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.590568  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.591110  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.591305  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:06:45.591405  990141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:06:45.591448  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:45.591556  990141 ssh_runner.go:195] Run: cat /version.json
	I0830 22:06:45.591578  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:06:45.594022  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.594343  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.594360  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.594466  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:45.594501  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.594630  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.594800  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:45.594898  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:45.594927  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:45.594929  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:06:45.595082  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:06:45.595222  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:06:45.595381  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:06:45.595502  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:06:45.705257  990141 ssh_runner.go:195] Run: systemctl --version
	I0830 22:06:45.710749  990141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:06:45.867834  990141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:06:45.874281  990141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:06:45.874358  990141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:06:45.892492  990141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:06:45.892509  990141 start.go:466] detecting cgroup driver to use...
	I0830 22:06:45.892580  990141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:06:45.910503  990141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:06:45.925507  990141 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:06:45.925561  990141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:06:45.940265  990141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:06:45.955901  990141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:06:46.066390  990141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:06:46.195421  990141 docker.go:212] disabling docker service ...
	I0830 22:06:46.195504  990141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:06:46.209707  990141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:06:46.221607  990141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:06:46.339598  990141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:06:46.456877  990141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:06:46.471592  990141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:06:46.492344  990141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:06:46.492397  990141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:46.502538  990141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:06:46.502586  990141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:46.511626  990141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:46.520818  990141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:46.530070  990141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
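The crictl.yaml write and the sed/rm edits above are the whole CRI-O configuration step for this profile. A quick manual check of the result, as a sketch that uses only paths and values already shown in the log lines above (output not captured from this run):

    # crictl endpoint written by the tee command earlier in this block
    cat /etc/crictl.yaml    # expected: runtime-endpoint: unix:///var/run/crio/crio.sock
    # keys rewritten in the CRI-O drop-in by the sed commands above
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"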
	I0830 22:06:46.539656  990141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:06:46.547696  990141 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:06:46.547755  990141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:06:46.560736  990141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
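The sysctl probe above fails only because br_netfilter is not loaded yet, which is why it is followed by the modprobe and the ip_forward write. Re-checking by hand on the same guest would look roughly like this (a sketch, values not captured from this run):

    sysctl net.bridge.bridge-nf-call-iptables   # path exists once br_netfilter is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above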
	I0830 22:06:46.569336  990141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:06:46.669331  990141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:06:46.826132  990141 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:06:46.826232  990141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:06:46.831011  990141 start.go:534] Will wait 60s for crictl version
	I0830 22:06:46.831060  990141 ssh_runner.go:195] Run: which crictl
	I0830 22:06:46.835061  990141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:06:46.867704  990141 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:06:46.867778  990141 ssh_runner.go:195] Run: crio --version
	I0830 22:06:46.912897  990141 ssh_runner.go:195] Run: crio --version
	I0830 22:06:46.966035  990141 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:06:44.357374  990773 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:06:44.357412  990773 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 22:06:44.357422  990773 cache.go:57] Caching tarball of preloaded images
	I0830 22:06:44.357497  990773 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:06:44.357507  990773 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
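The preload check above succeeds because the v1.28.1 cri-o tarball is already in the local cache, so no download is needed. Confirming the cached artifact by hand, as a sketch using the path from the log:

    ls -lh /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4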
	I0830 22:06:44.357597  990773 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/force-systemd-flag-882278/config.json ...
	I0830 22:06:44.357613  990773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/force-systemd-flag-882278/config.json: {Name:mk936d9606351e54c6245936e50fb75dfebaa0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:44.357736  990773 start.go:365] acquiring machines lock for force-systemd-flag-882278: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:06:46.967623  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetIP
	I0830 22:06:46.970474  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:46.970782  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:06:46.970805  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:06:46.971009  990141 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0830 22:06:46.975143  990141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:06:46.986860  990141 localpath.go:92] copying /home/jenkins/minikube-integration/17114-955377/.minikube/client.crt -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/client.crt
	I0830 22:06:46.987013  990141 localpath.go:117] copying /home/jenkins/minikube-integration/17114-955377/.minikube/client.key -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/client.key
	I0830 22:06:46.987191  990141 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:06:46.987242  990141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:06:47.015341  990141 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:06:47.015404  990141 ssh_runner.go:195] Run: which lz4
	I0830 22:06:47.019318  990141 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:06:47.023264  990141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:06:47.023294  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:06:48.849466  990141 crio.go:444] Took 1.830176 seconds to copy over tarball
	I0830 22:06:48.849524  990141 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:06:45.609755  990580 machine.go:88] provisioning docker machine ...
	I0830 22:06:45.609781  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:45.610019  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.610208  990580 buildroot.go:166] provisioning hostname "pause-820510"
	I0830 22:06:45.610247  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.610427  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.612864  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.613332  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.613366  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.613562  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:45.613747  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.613916  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.614067  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:45.614285  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:45.614720  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:45.614734  990580 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-820510 && echo "pause-820510" | sudo tee /etc/hostname
	I0830 22:06:45.761235  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-820510
	
	I0830 22:06:45.761263  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.764410  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.764838  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.764868  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.765095  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:45.765334  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.765531  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:45.765691  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:45.765905  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:45.766539  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:45.766571  990580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-820510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-820510/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-820510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:06:45.894801  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
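The two SSH commands above set the guest hostname and patch /etc/hosts for pause-820510. Verifying the result on the guest, as a sketch based only on the commands shown (output not captured from this run):

    hostname                          # pause-820510
    grep pause-820510 /etc/hosts      # 127.0.1.1 pause-820510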
	I0830 22:06:45.894835  990580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:06:45.894861  990580 buildroot.go:174] setting up certificates
	I0830 22:06:45.894873  990580 provision.go:83] configureAuth start
	I0830 22:06:45.894923  990580 main.go:141] libmachine: (pause-820510) Calling .GetMachineName
	I0830 22:06:45.895267  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:06:45.898467  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.898864  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.898894  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.899097  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:45.901866  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.902238  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:45.902269  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:45.902457  990580 provision.go:138] copyHostCerts
	I0830 22:06:45.902505  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:06:45.902522  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:06:45.902576  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:06:45.902678  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:06:45.902694  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:06:45.902715  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:06:45.902761  990580 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:06:45.902768  990580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:06:45.902785  990580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:06:45.902823  990580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.pause-820510 san=[192.168.72.94 192.168.72.94 localhost 127.0.0.1 minikube pause-820510]
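The server certificate generated above is signed for the SAN list shown in the log (the node IP 192.168.72.94, localhost, 127.0.0.1, minikube, pause-820510). Inspecting the SANs on the written file by hand, as a sketch using plain openssl and the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'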
	I0830 22:06:46.040935  990580 provision.go:172] copyRemoteCerts
	I0830 22:06:46.041000  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:06:46.041026  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:46.044126  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.044484  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:46.044520  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.044742  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:46.044890  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.045076  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:46.045232  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:46.148676  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:06:46.174085  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0830 22:06:46.199141  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:06:46.225569  990580 provision.go:86] duration metric: configureAuth took 330.678788ms
	I0830 22:06:46.225597  990580 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:06:46.225851  990580 config.go:182] Loaded profile config "pause-820510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:46.225968  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:46.229315  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.229785  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:46.229821  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:46.229973  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:46.230151  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.230363  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:46.230655  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:46.230866  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:46.231518  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:46.231545  990580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:06:51.716360  990141 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.866812305s)
	I0830 22:06:51.716378  990141 crio.go:451] Took 2.866893 seconds to extract the tarball
	I0830 22:06:51.716389  990141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:06:51.758108  990141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:06:51.880454  990141 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:06:51.880465  990141 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:06:51.880523  990141 ssh_runner.go:195] Run: crio config
	I0830 22:06:51.941934  990141 cni.go:84] Creating CNI manager for ""
	I0830 22:06:51.941947  990141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:06:51.941969  990141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:06:51.942003  990141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.85 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-693390 NodeName:cert-expiration-693390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:06:51.942173  990141 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-693390"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:06:51.942234  990141 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=cert-expiration-693390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:cert-expiration-693390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
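The kubeadm.yaml dump and the kubelet ExecStart drop-in above are copied onto the guest a few lines later and then consumed by the kubeadm init command at the end of this block. A sketch of exercising the same pair by hand on the guest, assuming only the paths shown in the log:

    # reload units after the 10-kubeadm.conf drop-in is in place
    sudo systemctl daemon-reload
    # dry-run the generated config with the bundled kubeadm before a real init
    sudo /var/lib/minikube/binaries/v1.28.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run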
	I0830 22:06:51.942284  990141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:06:51.952345  990141 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:06:51.952421  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:06:51.961700  990141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0830 22:06:51.977545  990141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:06:51.994404  990141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0830 22:06:52.010899  990141 ssh_runner.go:195] Run: grep 192.168.61.85	control-plane.minikube.internal$ /etc/hosts
	I0830 22:06:52.014738  990141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:06:52.026182  990141 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390 for IP: 192.168.61.85
	I0830 22:06:52.026207  990141 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.026426  990141 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:06:52.026474  990141 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:06:52.026582  990141 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/client.key
	I0830 22:06:52.026604  990141 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key.230625e1
	I0830 22:06:52.026624  990141 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt.230625e1 with IP's: [192.168.61.85 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 22:06:52.170288  990141 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt.230625e1 ...
	I0830 22:06:52.170308  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt.230625e1: {Name:mka0c3818d2ac1dfff963b14a0e3d08ae46e9b22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.170503  990141 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key.230625e1 ...
	I0830 22:06:52.170514  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key.230625e1: {Name:mkd09917b21ea61e8da5a121404b3d8f775e9118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.170579  990141 certs.go:337] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt.230625e1 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt
	I0830 22:06:52.170630  990141 certs.go:341] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key.230625e1 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key
	I0830 22:06:52.170674  990141 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.key
	I0830 22:06:52.170683  990141 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.crt with IP's: []
	I0830 22:06:52.407395  990141 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.crt ...
	I0830 22:06:52.407413  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.crt: {Name:mk479d34b53aafd5d58997625c425792b53320da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.416804  990141 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.key ...
	I0830 22:06:52.416826  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.key: {Name:mk16c661e834a055c2ec5a63de9ff8e87ed06581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:06:52.417054  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:06:52.417102  990141 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:06:52.417113  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:06:52.417140  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:06:52.417170  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:06:52.417194  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:06:52.417248  990141 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:06:52.418043  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:06:52.442772  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:06:52.465083  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:06:52.486921  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/cert-expiration-693390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:06:52.508040  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:06:52.529546  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:06:52.550843  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:06:52.572823  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:06:52.596139  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:06:52.617611  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:06:52.639351  990141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:06:52.660678  990141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:06:52.675868  990141 ssh_runner.go:195] Run: openssl version
	I0830 22:06:52.681359  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:06:52.692582  990141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:06:52.697345  990141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:06:52.697393  990141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:06:52.703179  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:06:52.714922  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:06:52.726606  990141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:06:52.731441  990141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:06:52.731492  990141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:06:52.737190  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:06:52.747567  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:06:52.758269  990141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:06:52.762786  990141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:06:52.762846  990141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:06:52.768268  990141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
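The three test -L / ln -fs commands above create the hash-named links that OpenSSL expects in /etc/ssl/certs; each name (3ec20f2e.0, b5213941.0, 51391683.0) is the subject hash printed by the openssl x509 -hash run just before it. Reproducing one of them by hand, as a sketch using the minikubeCA file from the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
    ls -l /etc/ssl/certs/b5213941.0   # symlink back to minikubeCA.pem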
	I0830 22:06:52.779185  990141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:06:52.783411  990141 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 22:06:52.783458  990141 kubeadm.go:404] StartCluster: {Name:cert-expiration-693390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:cert-expiration-693390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.85 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:06:52.783549  990141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:06:52.783596  990141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:06:52.822575  990141 cri.go:89] found id: ""
	I0830 22:06:52.822635  990141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:06:52.835565  990141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:06:52.848333  990141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:06:52.860305  990141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:06:52.860345  990141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 22:06:52.974126  990141 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:06:52.974250  990141 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:06:53.249122  990141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:06:53.249239  990141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:06:53.249335  990141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:06:53.442120  990141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:06:53.622059  990141 out.go:204]   - Generating certificates and keys ...
	I0830 22:06:53.622242  990141 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:06:53.622351  990141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:06:53.673430  990141 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 22:06:53.760222  990141 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 22:06:53.995408  990141 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 22:06:54.079659  990141 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 22:06:54.411095  990141 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 22:06:54.411612  990141 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-693390 localhost] and IPs [192.168.61.85 127.0.0.1 ::1]
	I0830 22:06:54.667920  990141 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 22:06:54.668467  990141 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-693390 localhost] and IPs [192.168.61.85 127.0.0.1 ::1]
	I0830 22:06:54.854520  990141 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 22:06:54.992756  990773 start.go:369] acquired machines lock for "force-systemd-flag-882278" in 10.634961869s
	I0830 22:06:54.992833  990773 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-882278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-882278 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:06:54.992975  990773 start.go:125] createHost starting for "" (driver="kvm2")
	I0830 22:06:54.161743  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:06:54.161776  990580 machine.go:91] provisioned docker machine in 8.552000473s
	I0830 22:06:54.161790  990580 start.go:300] post-start starting for "pause-820510" (driver="kvm2")
	I0830 22:06:54.161806  990580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:06:54.161829  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.162145  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:06:54.162173  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.165200  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.165622  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.165653  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.165846  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:54.166034  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.166232  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:54.166375  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:54.759254  990580 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:06:54.766926  990580 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:06:54.766956  990580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:06:54.767095  990580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:06:54.767212  990580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:06:54.767327  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:06:54.788678  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:06:54.817090  990580 start.go:303] post-start completed in 655.283715ms
	I0830 22:06:54.817116  990580 fix.go:56] fixHost completed within 9.231998658s
	I0830 22:06:54.817139  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.820125  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.820521  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.820557  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.820836  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:54.821024  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.821190  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:54.821332  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:54.821500  990580 main.go:141] libmachine: Using SSH client type: native
	I0830 22:06:54.822149  990580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0830 22:06:54.822169  990580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:06:54.992566  990580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433214.989306072
	
	I0830 22:06:54.992595  990580 fix.go:206] guest clock: 1693433214.989306072
	I0830 22:06:54.992605  990580 fix.go:219] Guest: 2023-08-30 22:06:54.989306072 +0000 UTC Remote: 2023-08-30 22:06:54.817120079 +0000 UTC m=+19.323029239 (delta=172.185993ms)
	I0830 22:06:54.992633  990580 fix.go:190] guest clock delta is within tolerance: 172.185993ms
	I0830 22:06:54.992639  990580 start.go:83] releasing machines lock for "pause-820510", held for 9.407551984s
	I0830 22:06:54.992686  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.992956  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:06:54.996069  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.996479  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:54.996510  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:54.996697  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997247  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997422  990580 main.go:141] libmachine: (pause-820510) Calling .DriverName
	I0830 22:06:54.997512  990580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:06:54.997562  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:54.997639  990580 ssh_runner.go:195] Run: cat /version.json
	I0830 22:06:54.997656  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHHostname
	I0830 22:06:55.000331  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000570  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000731  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:55.000790  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.000998  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:55.001213  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:55.001283  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHPort
	I0830 22:06:55.001301  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:06:55.001335  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:06:55.001453  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHKeyPath
	I0830 22:06:55.001471  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:55.001594  990580 main.go:141] libmachine: (pause-820510) Calling .GetSSHUsername
	I0830 22:06:55.001677  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:55.001718  990580 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/pause-820510/id_rsa Username:docker}
	I0830 22:06:55.163700  990580 ssh_runner.go:195] Run: systemctl --version
	I0830 22:06:55.177978  990580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:06:54.960236  990141 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 22:06:55.021758  990141 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 22:06:55.021892  990141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:06:55.246378  990141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:06:55.604213  990141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:06:55.733311  990141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:06:56.065326  990141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:06:56.066225  990141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:06:56.069290  990141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:06:54.995177  990773 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0830 22:06:54.995415  990773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:06:54.995483  990773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:06:55.014418  990773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0830 22:06:55.014940  990773 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:06:55.015562  990773 main.go:141] libmachine: Using API Version  1
	I0830 22:06:55.015586  990773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:06:55.015966  990773 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:06:55.016143  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .GetMachineName
	I0830 22:06:55.016290  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .DriverName
	I0830 22:06:55.016498  990773 start.go:159] libmachine.API.Create for "force-systemd-flag-882278" (driver="kvm2")
	I0830 22:06:55.016534  990773 client.go:168] LocalClient.Create starting
	I0830 22:06:55.016570  990773 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem
	I0830 22:06:55.016615  990773 main.go:141] libmachine: Decoding PEM data...
	I0830 22:06:55.016637  990773 main.go:141] libmachine: Parsing certificate...
	I0830 22:06:55.016711  990773 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem
	I0830 22:06:55.016739  990773 main.go:141] libmachine: Decoding PEM data...
	I0830 22:06:55.016762  990773 main.go:141] libmachine: Parsing certificate...
	I0830 22:06:55.016791  990773 main.go:141] libmachine: Running pre-create checks...
	I0830 22:06:55.016805  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .PreCreateCheck
	I0830 22:06:55.017214  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .GetConfigRaw
	I0830 22:06:55.017712  990773 main.go:141] libmachine: Creating machine...
	I0830 22:06:55.017732  990773 main.go:141] libmachine: (force-systemd-flag-882278) Calling .Create
	I0830 22:06:55.017859  990773 main.go:141] libmachine: (force-systemd-flag-882278) Creating KVM machine...
	I0830 22:06:55.019154  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | found existing default KVM network
	I0830 22:06:55.022308  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.021138  990833 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f7d0}
	I0830 22:06:55.027330  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | trying to create private KVM network mk-force-systemd-flag-882278 192.168.39.0/24...
	I0830 22:06:55.113030  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | private KVM network mk-force-systemd-flag-882278 192.168.39.0/24 created
	I0830 22:06:55.113202  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting up store path in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278 ...
	I0830 22:06:55.113235  990773 main.go:141] libmachine: (force-systemd-flag-882278) Building disk image from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 22:06:55.113250  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.113165  990833 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:06:55.113289  990773 main.go:141] libmachine: (force-systemd-flag-882278) Downloading /home/jenkins/minikube-integration/17114-955377/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 22:06:55.396018  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.395837  990833 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278/id_rsa...
	I0830 22:06:55.539041  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.538895  990833 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278/force-systemd-flag-882278.rawdisk...
	I0830 22:06:55.539075  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Writing magic tar header
	I0830 22:06:55.539112  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Writing SSH key tar header
	I0830 22:06:55.539133  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:55.539090  990833 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278 ...
	I0830 22:06:55.539311  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278 (perms=drwx------)
	I0830 22:06:55.539334  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines (perms=drwxr-xr-x)
	I0830 22:06:55.539348  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube (perms=drwxr-xr-x)
	I0830 22:06:55.539358  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377 (perms=drwxrwxr-x)
	I0830 22:06:55.539370  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 22:06:55.539380  990773 main.go:141] libmachine: (force-systemd-flag-882278) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 22:06:55.539391  990773 main.go:141] libmachine: (force-systemd-flag-882278) Creating domain...
	I0830 22:06:55.539415  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278
	I0830 22:06:55.539425  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines
	I0830 22:06:55.539437  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:06:55.539450  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377
	I0830 22:06:55.539461  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 22:06:55.539471  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home/jenkins
	I0830 22:06:55.539481  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Checking permissions on dir: /home
	I0830 22:06:55.539491  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | Skipping /home - not owner
	I0830 22:06:55.541549  990773 main.go:141] libmachine: (force-systemd-flag-882278) define libvirt domain using xml: 
	I0830 22:06:55.541575  990773 main.go:141] libmachine: (force-systemd-flag-882278) <domain type='kvm'>
	I0830 22:06:55.541594  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <name>force-systemd-flag-882278</name>
	I0830 22:06:55.541612  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <memory unit='MiB'>2048</memory>
	I0830 22:06:55.541646  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <vcpu>2</vcpu>
	I0830 22:06:55.541664  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <features>
	I0830 22:06:55.541694  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <acpi/>
	I0830 22:06:55.541707  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <apic/>
	I0830 22:06:55.541718  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <pae/>
	I0830 22:06:55.541730  990773 main.go:141] libmachine: (force-systemd-flag-882278)     
	I0830 22:06:55.541743  990773 main.go:141] libmachine: (force-systemd-flag-882278)   </features>
	I0830 22:06:55.541756  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <cpu mode='host-passthrough'>
	I0830 22:06:55.541769  990773 main.go:141] libmachine: (force-systemd-flag-882278)   
	I0830 22:06:55.541779  990773 main.go:141] libmachine: (force-systemd-flag-882278)   </cpu>
	I0830 22:06:55.541790  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <os>
	I0830 22:06:55.541809  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <type>hvm</type>
	I0830 22:06:55.541823  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <boot dev='cdrom'/>
	I0830 22:06:55.541837  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <boot dev='hd'/>
	I0830 22:06:55.541852  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <bootmenu enable='no'/>
	I0830 22:06:55.541863  990773 main.go:141] libmachine: (force-systemd-flag-882278)   </os>
	I0830 22:06:55.541876  990773 main.go:141] libmachine: (force-systemd-flag-882278)   <devices>
	I0830 22:06:55.541888  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <disk type='file' device='cdrom'>
	I0830 22:06:55.541904  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278/boot2docker.iso'/>
	I0830 22:06:55.541918  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <target dev='hdc' bus='scsi'/>
	I0830 22:06:55.541932  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <readonly/>
	I0830 22:06:55.541945  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </disk>
	I0830 22:06:55.541960  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <disk type='file' device='disk'>
	I0830 22:06:55.541974  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 22:06:55.541989  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/force-systemd-flag-882278/force-systemd-flag-882278.rawdisk'/>
	I0830 22:06:55.542001  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <target dev='hda' bus='virtio'/>
	I0830 22:06:55.542018  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </disk>
	I0830 22:06:55.542031  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <interface type='network'>
	I0830 22:06:55.542045  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <source network='mk-force-systemd-flag-882278'/>
	I0830 22:06:55.542058  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <model type='virtio'/>
	I0830 22:06:55.542071  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </interface>
	I0830 22:06:55.542080  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <interface type='network'>
	I0830 22:06:55.542090  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <source network='default'/>
	I0830 22:06:55.542103  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <model type='virtio'/>
	I0830 22:06:55.542114  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </interface>
	I0830 22:06:55.542126  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <serial type='pty'>
	I0830 22:06:55.542140  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <target port='0'/>
	I0830 22:06:55.542156  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </serial>
	I0830 22:06:55.542172  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <console type='pty'>
	I0830 22:06:55.542182  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <target type='serial' port='0'/>
	I0830 22:06:55.542202  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </console>
	I0830 22:06:55.542214  990773 main.go:141] libmachine: (force-systemd-flag-882278)     <rng model='virtio'>
	I0830 22:06:55.542229  990773 main.go:141] libmachine: (force-systemd-flag-882278)       <backend model='random'>/dev/random</backend>
	I0830 22:06:55.542241  990773 main.go:141] libmachine: (force-systemd-flag-882278)     </rng>
	I0830 22:06:55.542254  990773 main.go:141] libmachine: (force-systemd-flag-882278)     
	I0830 22:06:55.542262  990773 main.go:141] libmachine: (force-systemd-flag-882278)     
	I0830 22:06:55.542276  990773 main.go:141] libmachine: (force-systemd-flag-882278)   </devices>
	I0830 22:06:55.542288  990773 main.go:141] libmachine: (force-systemd-flag-882278) </domain>
	I0830 22:06:55.542306  990773 main.go:141] libmachine: (force-systemd-flag-882278) 
	I0830 22:06:55.629482  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:de:8a:14 in network default
	I0830 22:06:55.630182  990773 main.go:141] libmachine: (force-systemd-flag-882278) Ensuring networks are active...
	I0830 22:06:55.630212  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:55.631044  990773 main.go:141] libmachine: (force-systemd-flag-882278) Ensuring network default is active
	I0830 22:06:55.631363  990773 main.go:141] libmachine: (force-systemd-flag-882278) Ensuring network mk-force-systemd-flag-882278 is active
	I0830 22:06:55.631990  990773 main.go:141] libmachine: (force-systemd-flag-882278) Getting domain xml...
	I0830 22:06:55.632807  990773 main.go:141] libmachine: (force-systemd-flag-882278) Creating domain...
	I0830 22:06:57.026310  990773 main.go:141] libmachine: (force-systemd-flag-882278) Waiting to get IP...
	I0830 22:06:57.027245  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:57.027741  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:57.027813  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:57.027734  990833 retry.go:31] will retry after 215.225269ms: waiting for machine to come up
	I0830 22:06:57.244152  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:57.244724  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:57.244753  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:57.244690  990833 retry.go:31] will retry after 387.579873ms: waiting for machine to come up
	I0830 22:06:57.634209  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:57.634776  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:57.634804  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:57.634725  990833 retry.go:31] will retry after 346.434842ms: waiting for machine to come up
	I0830 22:06:57.983503  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:57.984087  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:57.984125  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:57.984030  990833 retry.go:31] will retry after 569.109205ms: waiting for machine to come up
	I0830 22:06:58.554714  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:58.555236  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:58.555262  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:58.555176  990833 retry.go:31] will retry after 631.47767ms: waiting for machine to come up
	I0830 22:06:59.188133  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:59.188603  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:59.188638  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:59.188536  990833 retry.go:31] will retry after 618.085766ms: waiting for machine to come up
	I0830 22:06:56.070848  990141 out.go:204]   - Booting up control plane ...
	I0830 22:06:56.071018  990141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:06:56.071141  990141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:06:56.071551  990141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:06:56.095950  990141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:06:56.098662  990141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:06:56.098831  990141 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:06:56.241394  990141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:06:55.986097  990580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:06:56.015156  990580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:06:56.015282  990580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:06:56.109625  990580 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0830 22:06:56.109661  990580 start.go:466] detecting cgroup driver to use...
	I0830 22:06:56.109803  990580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:06:56.146125  990580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:06:56.170972  990580 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:06:56.171051  990580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:06:56.242531  990580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:06:56.275716  990580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:06:56.581802  990580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:06:56.815136  990580 docker.go:212] disabling docker service ...
	I0830 22:06:56.815243  990580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:06:56.836976  990580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:06:56.851767  990580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:06:57.098143  990580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:06:57.347551  990580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:06:57.371482  990580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:06:57.440051  990580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:06:57.440164  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.470626  990580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:06:57.470717  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.498416  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.527036  990580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:06:57.555430  990580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:06:57.583763  990580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:06:57.605280  990580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:06:57.638115  990580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:06:57.978882  990580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:06:59.808094  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:06:59.808512  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:06:59.808545  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:06:59.808448  990833 retry.go:31] will retry after 720.710014ms: waiting for machine to come up
	I0830 22:07:00.530748  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:00.531257  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:00.531288  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:00.531205  990833 retry.go:31] will retry after 1.482403978s: waiting for machine to come up
	I0830 22:07:02.015218  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:02.015737  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:02.015781  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:02.015672  990833 retry.go:31] will retry after 1.803287858s: waiting for machine to come up
	I0830 22:07:03.820912  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:03.821341  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:03.821371  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:03.821283  990833 retry.go:31] will retry after 1.673310877s: waiting for machine to come up
	I0830 22:07:04.243898  990141 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005121 seconds
	I0830 22:07:04.244079  990141 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:07:04.262984  990141 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:07:04.796216  990141 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:07:04.796479  990141 kubeadm.go:322] [mark-control-plane] Marking the node cert-expiration-693390 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:07:05.314517  990141 kubeadm.go:322] [bootstrap-token] Using token: 3pliu7.2ck2z7k2h029781o
	I0830 22:07:05.316183  990141 out.go:204]   - Configuring RBAC rules ...
	I0830 22:07:05.316336  990141 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:07:05.328304  990141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:07:05.337220  990141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:07:05.341663  990141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:07:05.345964  990141 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:07:05.349559  990141 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:07:05.369027  990141 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:07:05.678202  990141 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:07:05.736460  990141 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:07:05.737804  990141 kubeadm.go:322] 
	I0830 22:07:05.737882  990141 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:07:05.737889  990141 kubeadm.go:322] 
	I0830 22:07:05.737981  990141 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:07:05.737987  990141 kubeadm.go:322] 
	I0830 22:07:05.738018  990141 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:07:05.738090  990141 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:07:05.738153  990141 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:07:05.738165  990141 kubeadm.go:322] 
	I0830 22:07:05.738240  990141 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:07:05.738246  990141 kubeadm.go:322] 
	I0830 22:07:05.738311  990141 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:07:05.738316  990141 kubeadm.go:322] 
	I0830 22:07:05.738383  990141 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:07:05.738482  990141 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:07:05.738571  990141 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:07:05.738579  990141 kubeadm.go:322] 
	I0830 22:07:05.738673  990141 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:07:05.738762  990141 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:07:05.738767  990141 kubeadm.go:322] 
	I0830 22:07:05.738867  990141 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3pliu7.2ck2z7k2h029781o \
	I0830 22:07:05.738994  990141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:07:05.739019  990141 kubeadm.go:322] 	--control-plane 
	I0830 22:07:05.739024  990141 kubeadm.go:322] 
	I0830 22:07:05.739129  990141 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:07:05.739133  990141 kubeadm.go:322] 
	I0830 22:07:05.739243  990141 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3pliu7.2ck2z7k2h029781o \
	I0830 22:07:05.739362  990141 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:07:05.739651  990141 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:07:05.739676  990141 cni.go:84] Creating CNI manager for ""
	I0830 22:07:05.739699  990141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:07:05.741786  990141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:07:05.743497  990141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:07:05.785087  990141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:07:05.811073  990141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:07:05.811162  990141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:07:05.811165  990141 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=cert-expiration-693390 minikube.k8s.io/updated_at=2023_08_30T22_07_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:07:05.882282  990141 ops.go:34] apiserver oom_adj: -16
	I0830 22:07:06.178742  990141 kubeadm.go:1081] duration metric: took 367.657958ms to wait for elevateKubeSystemPrivileges.
	I0830 22:07:06.215562  990141 kubeadm.go:406] StartCluster complete in 13.432094638s
	I0830 22:07:06.215597  990141 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:07:06.215698  990141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:07:06.217096  990141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:07:06.217354  990141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:07:06.217669  990141 config.go:182] Loaded profile config "cert-expiration-693390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:07:06.217799  990141 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:07:06.217889  990141 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-693390"
	I0830 22:07:06.217908  990141 addons.go:231] Setting addon storage-provisioner=true in "cert-expiration-693390"
	I0830 22:07:06.217909  990141 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-693390"
	I0830 22:07:06.217923  990141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-693390"
	I0830 22:07:06.217969  990141 host.go:66] Checking if "cert-expiration-693390" exists ...
	I0830 22:07:06.218426  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.218455  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.218486  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.218507  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.234781  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0830 22:07:06.235519  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.236167  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.236181  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.237341  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41185
	I0830 22:07:06.237631  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.237783  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.237817  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetState
	I0830 22:07:06.238335  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.238354  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.238729  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.239340  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.239375  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.248773  990141 addons.go:231] Setting addon default-storageclass=true in "cert-expiration-693390"
	I0830 22:07:06.248807  990141 host.go:66] Checking if "cert-expiration-693390" exists ...
	I0830 22:07:06.249172  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.249207  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.262024  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33431
	I0830 22:07:06.262544  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.263067  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.263083  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.263515  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.263722  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetState
	I0830 22:07:06.266002  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:07:06.267895  990141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:07:06.267897  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I0830 22:07:06.268537  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.269398  990141 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:07:06.269409  990141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:07:06.269428  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:07:06.269982  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.269995  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.270425  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.270529  990141 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-693390" context rescaled to 1 replicas
	I0830 22:07:06.270563  990141 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.85 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:07:06.273579  990141 out.go:177] * Verifying Kubernetes components...
	I0830 22:07:06.271210  990141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:07:06.272878  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:07:06.273545  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:07:06.275060  990141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:07:06.275098  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:07:06.275120  990141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:07:06.275122  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:07:06.275268  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:07:06.275462  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:07:06.275682  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:07:06.291301  990141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0830 22:07:06.292398  990141 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:07:06.292990  990141 main.go:141] libmachine: Using API Version  1
	I0830 22:07:06.293006  990141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:07:06.293414  990141 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:07:06.293627  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetState
	I0830 22:07:06.295365  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .DriverName
	I0830 22:07:06.296032  990141 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:07:06.296040  990141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:07:06.296059  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHHostname
	I0830 22:07:06.298874  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:07:06.299274  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:14:e4", ip: ""} in network mk-cert-expiration-693390: {Iface:virbr3 ExpiryTime:2023-08-30 23:06:38 +0000 UTC Type:0 Mac:52:54:00:f5:14:e4 Iaid: IPaddr:192.168.61.85 Prefix:24 Hostname:cert-expiration-693390 Clientid:01:52:54:00:f5:14:e4}
	I0830 22:07:06.299294  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | domain cert-expiration-693390 has defined IP address 192.168.61.85 and MAC address 52:54:00:f5:14:e4 in network mk-cert-expiration-693390
	I0830 22:07:06.299453  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHPort
	I0830 22:07:06.299599  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHKeyPath
	I0830 22:07:06.299733  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .GetSSHUsername
	I0830 22:07:06.299853  990141 sshutil.go:53] new ssh client: &{IP:192.168.61.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/cert-expiration-693390/id_rsa Username:docker}
	I0830 22:07:06.461045  990141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:07:06.462006  990141 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:07:06.462069  990141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:07:06.479901  990141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:07:06.498609  990141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:07:07.957879  990141 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.496801075s)
	I0830 22:07:07.957913  990141 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0830 22:07:07.957943  990141 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.495851407s)
	I0830 22:07:07.957969  990141 api_server.go:72] duration metric: took 1.687277939s to wait for apiserver process to appear ...
	I0830 22:07:07.957976  990141 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:07:07.958010  990141 api_server.go:253] Checking apiserver healthz at https://192.168.61.85:8443/healthz ...
	I0830 22:07:07.965169  990141 api_server.go:279] https://192.168.61.85:8443/healthz returned 200:
	ok
	I0830 22:07:07.966458  990141 api_server.go:141] control plane version: v1.28.1
	I0830 22:07:07.966475  990141 api_server.go:131] duration metric: took 8.493138ms to wait for apiserver health ...
	I0830 22:07:07.966484  990141 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:07:07.974645  990141 system_pods.go:59] 4 kube-system pods found
	I0830 22:07:07.974686  990141 system_pods.go:61] "etcd-cert-expiration-693390" [99d540bd-1ad3-487a-9d5d-410301332a18] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:07:07.974699  990141 system_pods.go:61] "kube-apiserver-cert-expiration-693390" [19267299-1891-43d1-b254-ee086437fdba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:07:07.974708  990141 system_pods.go:61] "kube-controller-manager-cert-expiration-693390" [d9e46c35-3658-48ad-b8e6-4301239203fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:07:07.974720  990141 system_pods.go:61] "kube-scheduler-cert-expiration-693390" [1a3e0c22-0607-4fc0-955f-37877ec44350] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:07:07.974727  990141 system_pods.go:74] duration metric: took 8.237179ms to wait for pod list to return data ...
	I0830 22:07:07.974738  990141 kubeadm.go:581] duration metric: took 1.704046607s to wait for : map[apiserver:true system_pods:true] ...
	I0830 22:07:07.974753  990141 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:07:07.978422  990141 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:07:07.978442  990141 node_conditions.go:123] node cpu capacity is 2
	I0830 22:07:07.978453  990141 node_conditions.go:105] duration metric: took 3.696615ms to run NodePressure ...
	I0830 22:07:07.978466  990141 start.go:228] waiting for startup goroutines ...
	I0830 22:07:08.251744  990141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.771809023s)
	I0830 22:07:08.251808  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.251821  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.253968  990141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.755335385s)
	I0830 22:07:08.254016  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.254027  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.254430  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.254442  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.254451  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.254460  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.254599  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.254610  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.254610  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Closing plugin on server side
	I0830 22:07:08.254618  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.254627  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.255284  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.255285  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Closing plugin on server side
	I0830 22:07:08.255294  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.255307  990141 main.go:141] libmachine: (cert-expiration-693390) DBG | Closing plugin on server side
	I0830 22:07:08.255334  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.255342  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.255445  990141 main.go:141] libmachine: Making call to close driver server
	I0830 22:07:08.255454  990141 main.go:141] libmachine: (cert-expiration-693390) Calling .Close
	I0830 22:07:08.255882  990141 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:07:08.255913  990141 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:07:08.257841  990141 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 22:07:08.259256  990141 addons.go:502] enable addons completed in 2.041462583s: enabled=[storage-provisioner default-storageclass]
	I0830 22:07:08.259293  990141 start.go:233] waiting for cluster config update ...
	I0830 22:07:08.259307  990141 start.go:242] writing updated cluster config ...
	I0830 22:07:08.259634  990141 ssh_runner.go:195] Run: rm -f paused
	I0830 22:07:08.337974  990141 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:07:08.340208  990141 out.go:177] * Done! kubectl is now configured to use "cert-expiration-693390" cluster and "default" namespace by default
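With the start sequence reported as Done!, the context named in the line above is already the active one; as an assumed follow-up (not part of this run), that can be confirmed from the host:

	kubectl config current-context                      # expected: cert-expiration-693390
	kubectl --context cert-expiration-693390 get nodes  # lists the single control-plane node
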
	I0830 22:07:07.412243  990580 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.433311903s)
	I0830 22:07:07.412282  990580 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:07:07.412346  990580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:07:07.419936  990580 start.go:534] Will wait 60s for crictl version
	I0830 22:07:07.420003  990580 ssh_runner.go:195] Run: which crictl
	I0830 22:07:07.425713  990580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:07:07.681641  990580 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:07:07.681755  990580 ssh_runner.go:195] Run: crio --version
	I0830 22:07:08.259976  990580 ssh_runner.go:195] Run: crio --version
	I0830 22:07:08.363331  990580 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:07:05.496237  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:05.496781  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:05.496811  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:05.496692  990833 retry.go:31] will retry after 2.403018753s: waiting for machine to come up
	I0830 22:07:07.901374  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | domain force-systemd-flag-882278 has defined MAC address 52:54:00:09:a5:69 in network mk-force-systemd-flag-882278
	I0830 22:07:07.901988  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | unable to find current IP address of domain force-systemd-flag-882278 in network mk-force-systemd-flag-882278
	I0830 22:07:07.902016  990773 main.go:141] libmachine: (force-systemd-flag-882278) DBG | I0830 22:07:07.901900  990833 retry.go:31] will retry after 2.875611012s: waiting for machine to come up
	I0830 22:07:08.364997  990580 main.go:141] libmachine: (pause-820510) Calling .GetIP
	I0830 22:07:08.368430  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:07:08.368857  990580 main.go:141] libmachine: (pause-820510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:5b:67", ip: ""} in network mk-pause-820510: {Iface:virbr4 ExpiryTime:2023-08-30 23:05:48 +0000 UTC Type:0 Mac:52:54:00:8d:5b:67 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:pause-820510 Clientid:01:52:54:00:8d:5b:67}
	I0830 22:07:08.368893  990580 main.go:141] libmachine: (pause-820510) DBG | domain pause-820510 has defined IP address 192.168.72.94 and MAC address 52:54:00:8d:5b:67 in network mk-pause-820510
	I0830 22:07:08.369113  990580 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0830 22:07:08.378998  990580 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:07:08.379077  990580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:07:08.444088  990580 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:07:08.444115  990580 crio.go:415] Images already preloaded, skipping extraction
	I0830 22:07:08.444179  990580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:07:08.497435  990580 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:07:08.497464  990580 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:07:08.497564  990580 ssh_runner.go:195] Run: crio config
	I0830 22:07:08.612207  990580 cni.go:84] Creating CNI manager for ""
	I0830 22:07:08.612239  990580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:07:08.612267  990580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:07:08.612295  990580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-820510 NodeName:pause-820510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:07:08.612513  990580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-820510"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:07:08.612616  990580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-820510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-820510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:07:08.612690  990580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:07:08.632244  990580 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:07:08.632339  990580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:07:08.652720  990580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0830 22:07:08.688698  990580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:07:08.726622  990580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
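The 2096-byte file written here is the kubeadm config printed earlier in this log. Minikube drives kubeadm against it itself; purely as an illustrative sketch of how such a file could be exercised by hand (not what minikube runs verbatim):

	# Dry-run only the preflight checks against the generated config:
	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new
	# Full init from the same config (hypothetical; minikube invokes the phases it needs itself):
	sudo /var/lib/minikube/binaries/v1.28.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new
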
	I0830 22:07:08.758923  990580 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0830 22:07:08.770902  990580 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510 for IP: 192.168.72.94
	I0830 22:07:08.770952  990580 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:07:08.771139  990580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:07:08.771204  990580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:07:08.771295  990580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/client.key
	I0830 22:07:08.771394  990580 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.key.90452619
	I0830 22:07:08.771460  990580 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.key
	I0830 22:07:08.771611  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:07:08.771647  990580 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:07:08.771662  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:07:08.771695  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:07:08.771730  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:07:08.771764  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:07:08.771837  990580 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:07:08.772744  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:07:08.818726  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:07:08.862923  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:07:08.911406  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/pause-820510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:07:08.999894  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:07:09.058431  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:07:09.119865  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:07:09.190391  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:07:09.241029  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:07:09.293842  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:07:09.338532  990580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:07:09.393923  990580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:07:09.420541  990580 ssh_runner.go:195] Run: openssl version
	I0830 22:07:09.432288  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:07:09.451569  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.461237  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.461322  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:07:09.472810  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:07:09.488771  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:07:09.507791  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.516037  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.516106  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:07:09.523501  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:07:09.545329  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:07:09.565996  990580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.575701  990580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.575768  990580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:07:09.588786  990580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
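The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is why each ln is preceded by an openssl x509 -hash run; a minimal sketch of the derivation for one of the files from this log:

	# Print the subject hash OpenSSL uses to locate a CA under /etc/ssl/certs:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	# The trust link is then just <hash>.0 pointing at the certificate, as in the commands above:
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
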
	I0830 22:07:09.614023  990580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:07:09.624374  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:07:09.636450  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:07:09.649864  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:07:09.661854  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:07:09.672722  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:07:09.686022  990580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
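Each -checkend 86400 call above asks whether the certificate stays valid for at least another 86400 seconds (24 hours); a minimal sketch of the same check on one of the files named in the log:

	# Exit status 0: does not expire within 24h; non-zero: it does (or the file is unreadable).
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-etcd-client.crt \
	  && echo "still valid for at least 24h" || echo "expires within 24h"
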
	I0830 22:07:09.700215  990580 kubeadm.go:404] StartCluster: {Name:pause-820510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.1 ClusterName:pause-820510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:fa
lse pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:07:09.700375  990580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:07:09.700430  990580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:07:09.756078  990580 cri.go:89] found id: "1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3"
	I0830 22:07:09.756108  990580 cri.go:89] found id: "ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976"
	I0830 22:07:09.756115  990580 cri.go:89] found id: "aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	I0830 22:07:09.756121  990580 cri.go:89] found id: "b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495"
	I0830 22:07:09.756126  990580 cri.go:89] found id: "bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79"
	I0830 22:07:09.756133  990580 cri.go:89] found id: "9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639"
	I0830 22:07:09.756139  990580 cri.go:89] found id: ""
	I0830 22:07:09.756200  990580 ssh_runner.go:195] Run: sudo runc list -f json
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:05:44 UTC, ends at Wed 2023-08-30 22:07:32 UTC. --
	Aug 30 22:07:31 pause-820510 crio[2630]: time="2023-08-30 22:07:31.759667459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aa88a84a-1573-45f9-81a4-7b5c73523b44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:07:31 pause-820510 crio[2630]: time="2023-08-30 22:07:31.760224591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aa88a84a-1573-45f9-81a4-7b5c73523b44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.052954693Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=b911d446-097b-4c39-85fc-31bdd68e486d name=/runtime.v1.RuntimeService/Status
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.053072046Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b911d446-097b-4c39-85fc-31bdd68e486d name=/runtime.v1.RuntimeService/Status
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.245555672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b4f5ef92-d3df-44ac-8345-fb00e9bbf557 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.245646514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b4f5ef92-d3df-44ac-8345-fb00e9bbf557 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.245943709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b4f5ef92-d3df-44ac-8345-fb00e9bbf557 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.285478977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bea81923-a2e7-4f10-8426-67a58b68e1de name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.285613693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bea81923-a2e7-4f10-8426-67a58b68e1de name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.285888971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bea81923-a2e7-4f10-8426-67a58b68e1de name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.323087228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8621a043-5409-4d4e-a8e4-4ad2abb3c5e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.323259742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8621a043-5409-4d4e-a8e4-4ad2abb3c5e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.323577403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8621a043-5409-4d4e-a8e4-4ad2abb3c5e9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.376277597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=34f2c25e-fb68-49a1-b798-dd3ce04141c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.376343886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=34f2c25e-fb68-49a1-b798-dd3ce04141c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.376574491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=34f2c25e-fb68-49a1-b798-dd3ce04141c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.414964932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f4797957-68b5-49cf-9b66-9c139efc324a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.415059762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f4797957-68b5-49cf-9b66-9c139efc324a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.415509847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f4797957-68b5-49cf-9b66-9c139efc324a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.454870993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3a40ea9b-8415-44ac-88bc-a97c2a0c95d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.454932830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3a40ea9b-8415-44ac-88bc-a97c2a0c95d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.455260820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3a40ea9b-8415-44ac-88bc-a97c2a0c95d5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.494411622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c3569d74-54a4-4ab8-9a02-c9de9c9ef472 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.494477046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c3569d74-54a4-4ab8-9a02-c9de9c9ef472 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:07:32 pause-820510 crio[2630]: time="2023-08-30 22:07:32.494716343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463,PodSandboxId:e204ca9921388ca6acf0f53ac45da2fb11851c96b1e914fe528d9150b4f9a4c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693433245755015413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0,PodSandboxId:91a124e79aae75217f8502347677e3e86f0e2bddd7f33a32e501fe06ff455fd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693433232812904409,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b,PodSandboxId:1699ee9242fbf455e09936ed81578a0b211a3b66b7c4113d06c3f0dfdfebe3e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693433230011631907,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2e911247,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b,PodSandboxId:ed4dae876a04e0c5ecbcaa60f3374cde22ca2107ffe8562b1e9dfd53745dc08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693433229912593067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{
io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c,PodSandboxId:c256e1c726594ea55202d06612e1bf5fde4ef3069a105ea05045b3dfbad4ed85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693433229167811906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c,PodSandboxId:f8635a571bc376be0f31dc752e4b456b0fcc5e6e01eb5a39d86e31a649711334,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693433228968457357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3,PodSandboxId:f53410c5d6ef5c23881c13e54e6cc484150046dedefb387640f2465023427c24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,State:CONTAINER_EXITED,CreatedAt:1693433216067466242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zjl5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61114403-040d-4f67-a7c0-91232c7b499e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c3410,io.kubernetes.container.resta
rtCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976,PodSandboxId:42406a0245bd63e25c1eb908dd3415fab8f814c74cd45892292488d5af8f93ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,State:CONTAINER_EXITED,CreatedAt:1693433216051427294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 317907ee69b8984088b017f8ff46a7db,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703,PodSandboxId:b0522f8d8e0e760f22689a4444771a52abceba8237c410045b839a2eb56505d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1693433216031220051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrqc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5084572f-86f8-4338-82d1-f3df68aae5fd,},Annotations:map[string]string{io.kubernetes.container.hash: be94e101,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"T
CP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495,PodSandboxId:3e4ceb4ffd2a77eb25410ae35512ea88072e8a1867f4e424cb2a3cf8f2604449,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1693433215991390498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b9c1525043120cb9292cc4b0ac27eb,},Annotations:map[string]string{io.kubernetes.container.hash: 2e911247,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79,PodSandboxId:a9b199e4e5f23c0b3c84d198623b705b632216b85a64c19cde559da4a05a8d7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1693433215963653319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d65dfa120c1febbd8341def54b8b82d,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639,PodSandboxId:79cf121efdbd2171800a42bc0774e8055f7edd9c4b056444f36602af08dc272b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1693433215882905400,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-820510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78721dadef96167f7ab96108b4edc786,},Annotations:map[string]string{io.kubernetes.container.hash: ac39086a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c3569d74-54a4-4ab8-9a02-c9de9c9ef472 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	e41116aefc915       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   6 seconds ago       Running             coredns                   2                   e204ca9921388
	28bf8feea07b5       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   19 seconds ago      Running             kube-proxy                2                   91a124e79aae7
	7ec8f19135e9f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   22 seconds ago      Running             etcd                      2                   1699ee9242fbf
	cb65566f89a00       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   22 seconds ago      Running             kube-scheduler            2                   ed4dae876a04e
	c94ec52e9f036       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   23 seconds ago      Running             kube-controller-manager   2                   c256e1c726594
	7ed7b37ccdf00       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   23 seconds ago      Running             kube-apiserver            2                   f8635a571bc37
	1e39fba900da8       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   36 seconds ago      Exited              kube-proxy                1                   f53410c5d6ef5
	ed97a3cf66164       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   36 seconds ago      Exited              kube-scheduler            1                   42406a0245bd6
	aa0b2dfde6334       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   36 seconds ago      Exited              coredns                   1                   b0522f8d8e0e7
	b18e5122a3fc5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   36 seconds ago      Exited              etcd                      1                   3e4ceb4ffd2a7
	bc3c56300a5df       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   36 seconds ago      Exited              kube-controller-manager   1                   a9b199e4e5f23
	9f63a7000a547       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   36 seconds ago      Exited              kube-apiserver            1                   79cf121efdbd2
	
	* 
	* ==> coredns [aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50013 - 47724 "HINFO IN 6454075429887002369.3773186301198531786. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010363644s
	
	* 
	* ==> coredns [e41116aefc91524f425a8854508923df25872f3999fcb804ec9f1d653f1d0463] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37414 - 31176 "HINFO IN 3074781893109531955.2640077726230566319. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010880168s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-820510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-820510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=pause-820510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_06_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:06:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-820510
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:07:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:06:37 +0000   Wed, 30 Aug 2023 22:06:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:06:37 +0000   Wed, 30 Aug 2023 22:06:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:06:37 +0000   Wed, 30 Aug 2023 22:06:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:06:37 +0000   Wed, 30 Aug 2023 22:06:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.94
	  Hostname:    pause-820510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 444dc489bdc444a0b27cb9d4b4ae41c8
	  System UUID:                444dc489-bdc4-44a0-b27c-b9d4b4ae41c8
	  Boot ID:                    b43e6b75-2fc1-43af-9087-5889db85ed24
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jrqc4                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 etcd-pause-820510                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         75s
	  kube-system                 kube-apiserver-pause-820510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-pause-820510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-zjl5m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-pause-820510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientMemory  85s (x9 over 85s)  kubelet          Node pause-820510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x7 over 85s)  kubelet          Node pause-820510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)  kubelet          Node pause-820510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node pause-820510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node pause-820510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node pause-820510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                75s                kubelet          Node pause-820510 status is now: NodeReady
	  Normal  RegisteredNode           63s                node-controller  Node pause-820510 event: Registered Node pause-820510 in Controller
	  Normal  RegisteredNode           4s                 node-controller  Node pause-820510 event: Registered Node pause-820510 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug30 22:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071197] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527404] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.743481] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.178441] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.149921] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.992732] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.114317] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.148335] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.112392] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.195727] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[Aug30 22:06] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +9.791437] systemd-fstab-generator[1258]: Ignoring "noauto" for root device
	[ +38.018069] kauditd_printk_skb: 23 callbacks suppressed
	[  +2.132716] systemd-fstab-generator[2390]: Ignoring "noauto" for root device
	[  +0.242500] systemd-fstab-generator[2419]: Ignoring "noauto" for root device
	[  +0.272413] systemd-fstab-generator[2436]: Ignoring "noauto" for root device
	[  +0.262948] systemd-fstab-generator[2447]: Ignoring "noauto" for root device
	[  +0.586845] systemd-fstab-generator[2487]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [7ec8f19135e9f32dec7fbd4cd4fa2774cd50ee12790db3615bc6a8e50f11a45b] <==
	* {"level":"warn","ts":"2023-08-30T22:07:28.647564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.236104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-08-30T22:07:28.648315Z","caller":"traceutil/trace.go:171","msg":"trace[895176764] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:505; }","duration":"306.987022ms","start":"2023-08-30T22:07:28.341316Z","end":"2023-08-30T22:07:28.648303Z","steps":["trace[895176764] 'agreement among raft nodes before linearized reading'  (duration: 306.137378ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:28.648376Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.3413Z","time spent":"307.065018ms","remote":"127.0.0.1:35158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":237,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"info","ts":"2023-08-30T22:07:28.647373Z","caller":"traceutil/trace.go:171","msg":"trace[1793542339] linearizableReadLoop","detail":"{readStateIndex:526; appliedIndex:525; }","duration":"305.879169ms","start":"2023-08-30T22:07:28.341341Z","end":"2023-08-30T22:07:28.64722Z","steps":["trace[1793542339] 'read index received'  (duration: 203.207738ms)","trace[1793542339] 'applied index is now lower than readState.Index'  (duration: 102.670215ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T22:07:29.295649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.170699ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11493619767956879919 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" mod_revision:403 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-08-30T22:07:29.295848Z","caller":"traceutil/trace.go:171","msg":"trace[369982309] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:526; }","duration":"472.645706ms","start":"2023-08-30T22:07:28.823191Z","end":"2023-08-30T22:07:29.295837Z","steps":["trace[369982309] 'read index received'  (duration: 120.242499ms)","trace[369982309] 'applied index is now lower than readState.Index'  (duration: 352.402589ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-30T22:07:29.296178Z","caller":"traceutil/trace.go:171","msg":"trace[2144013270] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"629.944383ms","start":"2023-08-30T22:07:28.666222Z","end":"2023-08-30T22:07:29.296167Z","steps":["trace[2144013270] 'process raft request'  (duration: 277.203038ms)","trace[2144013270] 'compare'  (duration: 352.071287ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T22:07:29.296239Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.666178Z","time spent":"630.025523ms","remote":"127.0.0.1:35174","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1298,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" mod_revision:403 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-9zk9x\" > >"}
	{"level":"info","ts":"2023-08-30T22:07:29.296378Z","caller":"traceutil/trace.go:171","msg":"trace[1842352685] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"628.633729ms","start":"2023-08-30T22:07:28.667739Z","end":"2023-08-30T22:07:29.296373Z","steps":["trace[1842352685] 'process raft request'  (duration: 628.019602ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.296408Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.667726Z","time spent":"628.665256ms","remote":"127.0.0.1:35150","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:402 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"warn","ts":"2023-08-30T22:07:29.296489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"473.396558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2023-08-30T22:07:29.296506Z","caller":"traceutil/trace.go:171","msg":"trace[1616670351] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:507; }","duration":"473.413715ms","start":"2023-08-30T22:07:28.823087Z","end":"2023-08-30T22:07:29.296501Z","steps":["trace[1616670351] 'agreement among raft nodes before linearized reading'  (duration: 473.377161ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.296519Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.823068Z","time spent":"473.447395ms","remote":"127.0.0.1:35152","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5449,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2023-08-30T22:07:29.296673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.113655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2895"}
	{"level":"info","ts":"2023-08-30T22:07:29.2967Z","caller":"traceutil/trace.go:171","msg":"trace[601172364] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:507; }","duration":"434.143377ms","start":"2023-08-30T22:07:28.862549Z","end":"2023-08-30T22:07:29.296692Z","steps":["trace[601172364] 'agreement among raft nodes before linearized reading'  (duration: 434.086801ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.296722Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.862533Z","time spent":"434.182797ms","remote":"127.0.0.1:35218","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":2918,"request content":"key:\"/registry/daemonsets/kube-system/kube-proxy\" "}
	{"level":"warn","ts":"2023-08-30T22:07:29.298501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"435.506969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2023-08-30T22:07:29.298564Z","caller":"traceutil/trace.go:171","msg":"trace[205033553] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:507; }","duration":"435.574269ms","start":"2023-08-30T22:07:28.862982Z","end":"2023-08-30T22:07:29.298557Z","steps":["trace[205033553] 'agreement among raft nodes before linearized reading'  (duration: 435.442944ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.298618Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.862974Z","time spent":"435.635549ms","remote":"127.0.0.1:35214","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4156,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-08-30T22:07:29.299217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"435.725588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/kube-dns\" ","response":"range_response_count:1 size:1211"}
	{"level":"info","ts":"2023-08-30T22:07:29.299318Z","caller":"traceutil/trace.go:171","msg":"trace[1440488534] range","detail":"{range_begin:/registry/services/specs/kube-system/kube-dns; range_end:; response_count:1; response_revision:507; }","duration":"435.83196ms","start":"2023-08-30T22:07:28.863479Z","end":"2023-08-30T22:07:29.299311Z","steps":["trace[1440488534] 'agreement among raft nodes before linearized reading'  (duration: 433.431195ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.299341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.86347Z","time spent":"435.864387ms","remote":"127.0.0.1:35156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":1234,"request content":"key:\"/registry/services/specs/kube-system/kube-dns\" "}
	{"level":"warn","ts":"2023-08-30T22:07:29.299903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"437.2361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-820510\" ","response":"range_response_count:1 size:676"}
	{"level":"info","ts":"2023-08-30T22:07:29.300007Z","caller":"traceutil/trace.go:171","msg":"trace[1878773192] range","detail":"{range_begin:/registry/csinodes/pause-820510; range_end:; response_count:1; response_revision:507; }","duration":"437.343148ms","start":"2023-08-30T22:07:28.862657Z","end":"2023-08-30T22:07:29.3Z","steps":["trace[1878773192] 'agreement among raft nodes before linearized reading'  (duration: 436.069091ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:07:29.300049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:07:28.862653Z","time spent":"437.388159ms","remote":"127.0.0.1:35200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":699,"request content":"key:\"/registry/csinodes/pause-820510\" "}
	
	* 
	* ==> etcd [b18e5122a3fc57a0d28bdda160ab60a6e77c3b762c3494c3a782d15e9dcdd495] <==
	* 
	* 
	* ==> kernel <==
	*  22:07:32 up 1 min,  0 users,  load average: 1.74, 0.65, 0.24
	Linux pause-820510 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7ed7b37ccdf00759099bbbbc6c93342c34d7fa2783d6edf3704627ce3ab7c01c] <==
	* I0830 22:07:15.148955       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0830 22:07:15.149223       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0830 22:07:15.149958       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0830 22:07:15.151808       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0830 22:07:15.159078       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0830 22:07:15.159226       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0830 22:07:15.956217       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0830 22:07:28.134806       1 trace.go:236] Trace[346750859]: "Get" accept:application/json, */*,audit-id:49e669cb-3689-4a50-b8fa-bb76ad1aed63,client:192.168.72.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-820510,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (30-Aug-2023 22:07:27.563) (total time: 571ms):
	Trace[346750859]: ---"About to write a response" 570ms (22:07:28.134)
	Trace[346750859]: [571.306514ms] [571.306514ms] END
	I0830 22:07:28.138237       1 trace.go:236] Trace[470804109]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:193167bc-441c-4786-b68f-f202b808802b,client:192.168.72.94,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-820510/status,user-agent:kubelet/v1.28.1 (linux/amd64) kubernetes/8dc49c4,verb:PATCH (30-Aug-2023 22:07:27.536) (total time: 601ms):
	Trace[470804109]: ["GuaranteedUpdate etcd3" audit-id:193167bc-441c-4786-b68f-f202b808802b,key:/pods/kube-system/etcd-pause-820510,type:*core.Pod,resource:pods 601ms (22:07:27.536)
	Trace[470804109]:  ---"Txn call completed" 594ms (22:07:28.134)]
	Trace[470804109]: ---"Object stored in database" 595ms (22:07:28.134)
	Trace[470804109]: [601.364626ms] [601.364626ms] END
	I0830 22:07:28.664423       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0830 22:07:28.666517       1 controller.go:624] quota admission added evaluator for: endpoints
	I0830 22:07:29.301568       1 trace.go:236] Trace[1334724778]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:5708b4bb-2cfb-489b-9b5f-ebf567f457c3,client:192.168.72.94,protocol:HTTP/2.0,resource:endpointslices,scope:resource,url:/apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices/kube-dns-9zk9x,user-agent:kube-controller-manager/v1.28.1 (linux/amd64) kubernetes/8dc49c4/system:serviceaccount:kube-system:endpointslice-controller,verb:PUT (30-Aug-2023 22:07:28.662) (total time: 639ms):
	Trace[1334724778]: ["GuaranteedUpdate etcd3" audit-id:5708b4bb-2cfb-489b-9b5f-ebf567f457c3,key:/endpointslices/kube-system/kube-dns-9zk9x,type:*discovery.EndpointSlice,resource:endpointslices.discovery.k8s.io 639ms (22:07:28.662)
	Trace[1334724778]:  ---"Txn call completed" 636ms (22:07:29.301)]
	Trace[1334724778]: [639.461143ms] [639.461143ms] END
	I0830 22:07:29.301802       1 trace.go:236] Trace[1539064073]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:47d720a2-0ac0-4c1a-938f-71fe601ecfbf,client:192.168.72.94,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/kube-dns,user-agent:kube-controller-manager/v1.28.1 (linux/amd64) kubernetes/8dc49c4/system:serviceaccount:kube-system:endpoint-controller,verb:PUT (30-Aug-2023 22:07:28.664) (total time: 636ms):
	Trace[1539064073]: ["GuaranteedUpdate etcd3" audit-id:47d720a2-0ac0-4c1a-938f-71fe601ecfbf,key:/services/endpoints/kube-system/kube-dns,type:*core.Endpoints,resource:endpoints 636ms (22:07:28.665)
	Trace[1539064073]:  ---"Txn call completed" 635ms (22:07:29.301)]
	Trace[1539064073]: [636.921542ms] [636.921542ms] END
	
	* 
	* ==> kube-apiserver [9f63a7000a5474c9bd555f63a728633274f571a6abd09eb70123aa0ff18ed639] <==
	* 
	* 
	* ==> kube-controller-manager [bc3c56300a5df8689f45defd891a4bc55e07eccb4993bb7dff07a103c355ff79] <==
	* 
	* 
	* ==> kube-controller-manager [c94ec52e9f036ccae39157f777cf3752fc01921063a92c275a989908716fc94c] <==
	* I0830 22:07:28.315349       1 shared_informer.go:318] Caches are synced for service account
	I0830 22:07:28.319703       1 shared_informer.go:318] Caches are synced for daemon sets
	I0830 22:07:28.324955       1 shared_informer.go:318] Caches are synced for taint
	I0830 22:07:28.325087       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0830 22:07:28.325337       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-820510"
	I0830 22:07:28.325441       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0830 22:07:28.325536       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0830 22:07:28.325847       1 taint_manager.go:211] "Sending events to api server"
	I0830 22:07:28.325992       1 event.go:307] "Event occurred" object="pause-820510" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-820510 event: Registered Node pause-820510 in Controller"
	I0830 22:07:28.337809       1 shared_informer.go:318] Caches are synced for ephemeral
	I0830 22:07:28.337889       1 shared_informer.go:318] Caches are synced for stateful set
	I0830 22:07:28.337900       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0830 22:07:28.337907       1 shared_informer.go:318] Caches are synced for TTL
	I0830 22:07:28.340252       1 shared_informer.go:318] Caches are synced for GC
	I0830 22:07:28.345708       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0830 22:07:28.354363       1 shared_informer.go:318] Caches are synced for HPA
	I0830 22:07:28.356795       1 shared_informer.go:318] Caches are synced for attach detach
	I0830 22:07:28.358468       1 shared_informer.go:318] Caches are synced for disruption
	I0830 22:07:28.393222       1 shared_informer.go:318] Caches are synced for crt configmap
	I0830 22:07:28.397553       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 22:07:28.403088       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0830 22:07:28.483641       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 22:07:28.854450       1 shared_informer.go:318] Caches are synced for garbage collector
	I0830 22:07:28.854636       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0830 22:07:28.858912       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [1e39fba900da87995573adae815deaf3f714153bdea032ea36c2da8686e5dbd3] <==
	* 
	* 
	* ==> kube-proxy [28bf8feea07b5dcc9895c0fcd8768749591527e05492bb8a684b8bba621e01a0] <==
	* I0830 22:07:12.977931       1 server_others.go:69] "Using iptables proxy"
	I0830 22:07:15.111645       1 node.go:141] Successfully retrieved node IP: 192.168.72.94
	I0830 22:07:15.353190       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 22:07:15.353348       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 22:07:15.358866       1 server_others.go:152] "Using iptables Proxier"
	I0830 22:07:15.359082       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 22:07:15.359529       1 server.go:846] "Version info" version="v1.28.1"
	I0830 22:07:15.359566       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:07:15.361253       1 config.go:315] "Starting node config controller"
	I0830 22:07:15.361290       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 22:07:15.362338       1 config.go:188] "Starting service config controller"
	I0830 22:07:15.362596       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 22:07:15.362712       1 config.go:97] "Starting endpoint slice config controller"
	I0830 22:07:15.362719       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 22:07:15.461741       1 shared_informer.go:318] Caches are synced for node config
	I0830 22:07:15.464646       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 22:07:15.464660       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [cb65566f89a0098d205ee5c68828b59865e52c6d3edbd6945cfd480bf1051b0b] <==
	* I0830 22:07:12.212532       1 serving.go:348] Generated self-signed cert in-memory
	W0830 22:07:15.055207       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0830 22:07:15.055444       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 22:07:15.055458       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0830 22:07:15.055471       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 22:07:15.119783       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0830 22:07:15.119834       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:07:15.125832       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0830 22:07:15.126025       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0830 22:07:15.126048       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 22:07:15.126077       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0830 22:07:15.226835       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ed97a3cf6616482f5e86e934d75ff4c960c6648644103c8a683039cbfbd99976] <==
	* 
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:05:44 UTC, ends at Wed 2023-08-30 22:07:33 UTC. --
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.589559    1265 status_manager.go:853] "Failed to get status for pod" podUID="78721dadef96167f7ab96108b4edc786" pod="kube-system/kube-apiserver-pause-820510" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-820510\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.590297    1265 status_manager.go:853] "Failed to get status for pod" podUID="61114403-040d-4f67-a7c0-91232c7b499e" pod="kube-system/kube-proxy-zjl5m" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjl5m\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.669889    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?resourceVersion=0&timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.670394    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.670760    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.671071    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.671611    1265 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-820510\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-820510?timeout=10s\": dial tcp 192.168.72.94:8443: connect: connection refused"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: E0830 22:07:07.671667    1265 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.698381    1265 scope.go:117] "RemoveContainer" containerID="a06b4dab9d461a996e90c7378e63b3034a632f7cac47bc307602ca476ac85ddf"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.916559    1265 scope.go:117] "RemoveContainer" containerID="7acf9b92ae62ee58a768f304cc7ca0e1ac940575001c7b631c1281ac5e87fe2b"
	Aug 30 22:07:07 pause-820510 kubelet[1265]: I0830 22:07:07.974653    1265 scope.go:117] "RemoveContainer" containerID="08aeb861e5e608ed884fc3aeac04b271ccb2f019e1a43c288186f1feb79a118c"
	Aug 30 22:07:08 pause-820510 kubelet[1265]: I0830 22:07:08.040189    1265 scope.go:117] "RemoveContainer" containerID="a370e2b1dd5d2db3bc0c30c527d2ef75988ef1e82017e4afdb1aa2196b9c28a8"
	Aug 30 22:07:08 pause-820510 kubelet[1265]: I0830 22:07:08.133361    1265 scope.go:117] "RemoveContainer" containerID="fd28398dff7f2fd66df1ce09fe5ee6d665425eaba45e6e4865b7737b9bc3cbf8"
	Aug 30 22:07:11 pause-820510 kubelet[1265]: E0830 22:07:11.264814    1265 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-jrqc4_kube-system(5084572f-86f8-4338-82d1-f3df68aae5fd)\"" pod="kube-system/coredns-5dd5756b68-jrqc4" podUID="5084572f-86f8-4338-82d1-f3df68aae5fd"
	Aug 30 22:07:11 pause-820510 kubelet[1265]: I0830 22:07:11.634569    1265 scope.go:117] "RemoveContainer" containerID="aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	Aug 30 22:07:11 pause-820510 kubelet[1265]: E0830 22:07:11.636086    1265 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-jrqc4_kube-system(5084572f-86f8-4338-82d1-f3df68aae5fd)\"" pod="kube-system/coredns-5dd5756b68-jrqc4" podUID="5084572f-86f8-4338-82d1-f3df68aae5fd"
	Aug 30 22:07:12 pause-820510 kubelet[1265]: I0830 22:07:12.647609    1265 scope.go:117] "RemoveContainer" containerID="aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	Aug 30 22:07:12 pause-820510 kubelet[1265]: E0830 22:07:12.647970    1265 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-jrqc4_kube-system(5084572f-86f8-4338-82d1-f3df68aae5fd)\"" pod="kube-system/coredns-5dd5756b68-jrqc4" podUID="5084572f-86f8-4338-82d1-f3df68aae5fd"
	Aug 30 22:07:13 pause-820510 kubelet[1265]: I0830 22:07:13.656916    1265 scope.go:117] "RemoveContainer" containerID="aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	Aug 30 22:07:13 pause-820510 kubelet[1265]: E0830 22:07:13.657237    1265 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5dd5756b68-jrqc4_kube-system(5084572f-86f8-4338-82d1-f3df68aae5fd)\"" pod="kube-system/coredns-5dd5756b68-jrqc4" podUID="5084572f-86f8-4338-82d1-f3df68aae5fd"
	Aug 30 22:07:16 pause-820510 kubelet[1265]: E0830 22:07:16.847972    1265 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:07:16 pause-820510 kubelet[1265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:07:16 pause-820510 kubelet[1265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:07:16 pause-820510 kubelet[1265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:07:25 pause-820510 kubelet[1265]: I0830 22:07:25.716398    1265 scope.go:117] "RemoveContainer" containerID="aa0b2dfde6334947461f098545a47c348b551dfa1d96d004c5d97d956d73a703"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:07:32.064735  991308 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17114-955377/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-820510 -n pause-820510
helpers_test.go:261: (dbg) Run:  kubectl --context pause-820510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (58.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-698195 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-698195 --alsologtostderr -v=3: exit status 82 (2m0.994028305s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-698195"  ...
	* Stopping node "no-preload-698195"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:11:16.725116  993819 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:11:16.725405  993819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:11:16.725446  993819 out.go:309] Setting ErrFile to fd 2...
	I0830 22:11:16.725464  993819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:11:16.725771  993819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:11:16.726130  993819 out.go:303] Setting JSON to false
	I0830 22:11:16.726318  993819 mustload.go:65] Loading cluster: no-preload-698195
	I0830 22:11:16.726822  993819 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:11:16.726968  993819 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/config.json ...
	I0830 22:11:16.727201  993819 mustload.go:65] Loading cluster: no-preload-698195
	I0830 22:11:16.727401  993819 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:11:16.727466  993819 stop.go:39] StopHost: no-preload-698195
	I0830 22:11:16.728064  993819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:11:16.728158  993819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:11:16.749239  993819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0830 22:11:16.749944  993819 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:11:16.750692  993819 main.go:141] libmachine: Using API Version  1
	I0830 22:11:16.750715  993819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:11:16.751049  993819 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:11:16.753576  993819 out.go:177] * Stopping node "no-preload-698195"  ...
	I0830 22:11:16.755359  993819 main.go:141] libmachine: Stopping "no-preload-698195"...
	I0830 22:11:16.755371  993819 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:11:16.757588  993819 main.go:141] libmachine: (no-preload-698195) Calling .Stop
	I0830 22:11:16.761340  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 0/60
	I0830 22:11:17.763485  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 1/60
	I0830 22:11:18.765273  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 2/60
	I0830 22:11:19.767359  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 3/60
	I0830 22:11:20.768980  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 4/60
	I0830 22:11:21.770998  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 5/60
	I0830 22:11:22.772841  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 6/60
	I0830 22:11:23.774185  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 7/60
	I0830 22:11:24.775839  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 8/60
	I0830 22:11:25.777376  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 9/60
	I0830 22:11:26.779119  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 10/60
	I0830 22:11:27.780742  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 11/60
	I0830 22:11:28.782219  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 12/60
	I0830 22:11:29.783667  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 13/60
	I0830 22:11:30.785213  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 14/60
	I0830 22:11:31.787407  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 15/60
	I0830 22:11:32.788776  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 16/60
	I0830 22:11:33.790240  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 17/60
	I0830 22:11:34.791529  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 18/60
	I0830 22:11:35.793313  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 19/60
	I0830 22:11:36.795648  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 20/60
	I0830 22:11:37.797010  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 21/60
	I0830 22:11:38.798286  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 22/60
	I0830 22:11:39.799872  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 23/60
	I0830 22:11:40.801067  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 24/60
	I0830 22:11:41.802715  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 25/60
	I0830 22:11:42.803995  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 26/60
	I0830 22:11:43.806109  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 27/60
	I0830 22:11:44.807595  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 28/60
	I0830 22:11:45.809353  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 29/60
	I0830 22:11:46.811806  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 30/60
	I0830 22:11:47.814162  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 31/60
	I0830 22:11:48.815844  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 32/60
	I0830 22:11:49.817552  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 33/60
	I0830 22:11:50.818908  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 34/60
	I0830 22:11:51.820804  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 35/60
	I0830 22:11:52.823191  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 36/60
	I0830 22:11:53.824568  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 37/60
	I0830 22:11:54.826228  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 38/60
	I0830 22:11:55.827526  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 39/60
	I0830 22:11:56.829755  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 40/60
	I0830 22:11:57.831560  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 41/60
	I0830 22:11:58.833044  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 42/60
	I0830 22:11:59.834391  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 43/60
	I0830 22:12:00.836009  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 44/60
	I0830 22:12:01.838016  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 45/60
	I0830 22:12:02.840065  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 46/60
	I0830 22:12:03.841454  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 47/60
	I0830 22:12:04.842966  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 48/60
	I0830 22:12:05.844630  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 49/60
	I0830 22:12:06.846800  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 50/60
	I0830 22:12:07.848347  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 51/60
	I0830 22:12:08.849896  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 52/60
	I0830 22:12:09.851293  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 53/60
	I0830 22:12:10.853326  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 54/60
	I0830 22:12:11.855420  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 55/60
	I0830 22:12:12.857430  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 56/60
	I0830 22:12:13.859385  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 57/60
	I0830 22:12:14.860981  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 58/60
	I0830 22:12:15.862534  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 59/60
	I0830 22:12:16.863289  993819 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0830 22:12:16.863366  993819 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:12:16.863392  993819 retry.go:31] will retry after 644.751717ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:12:17.509270  993819 stop.go:39] StopHost: no-preload-698195
	I0830 22:12:17.509838  993819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:12:17.509904  993819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:12:17.525225  993819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0830 22:12:17.525698  993819 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:12:17.526221  993819 main.go:141] libmachine: Using API Version  1
	I0830 22:12:17.526241  993819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:12:17.526625  993819 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:12:17.529097  993819 out.go:177] * Stopping node "no-preload-698195"  ...
	I0830 22:12:17.530714  993819 main.go:141] libmachine: Stopping "no-preload-698195"...
	I0830 22:12:17.530733  993819 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:12:17.532753  993819 main.go:141] libmachine: (no-preload-698195) Calling .Stop
	I0830 22:12:17.536552  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 0/60
	I0830 22:12:18.538163  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 1/60
	I0830 22:12:19.539464  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 2/60
	I0830 22:12:20.540665  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 3/60
	I0830 22:12:21.542343  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 4/60
	I0830 22:12:22.543648  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 5/60
	I0830 22:12:23.545060  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 6/60
	I0830 22:12:24.546449  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 7/60
	I0830 22:12:25.548859  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 8/60
	I0830 22:12:26.550228  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 9/60
	I0830 22:12:27.552047  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 10/60
	I0830 22:12:28.553318  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 11/60
	I0830 22:12:29.554705  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 12/60
	I0830 22:12:30.556095  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 13/60
	I0830 22:12:31.558366  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 14/60
	I0830 22:12:32.559857  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 15/60
	I0830 22:12:33.561302  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 16/60
	I0830 22:12:34.562747  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 17/60
	I0830 22:12:35.564189  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 18/60
	I0830 22:12:36.566174  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 19/60
	I0830 22:12:37.567811  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 20/60
	I0830 22:12:38.569556  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 21/60
	I0830 22:12:39.571067  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 22/60
	I0830 22:12:40.572385  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 23/60
	I0830 22:12:41.573790  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 24/60
	I0830 22:12:42.575148  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 25/60
	I0830 22:12:43.576518  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 26/60
	I0830 22:12:44.578005  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 27/60
	I0830 22:12:45.579180  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 28/60
	I0830 22:12:46.580559  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 29/60
	I0830 22:12:47.582518  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 30/60
	I0830 22:12:48.584594  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 31/60
	I0830 22:12:49.585974  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 32/60
	I0830 22:12:50.587372  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 33/60
	I0830 22:12:51.588780  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 34/60
	I0830 22:12:52.590078  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 35/60
	I0830 22:12:53.591375  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 36/60
	I0830 22:12:54.592678  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 37/60
	I0830 22:12:55.594413  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 38/60
	I0830 22:12:56.595658  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 39/60
	I0830 22:12:57.597879  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 40/60
	I0830 22:12:58.599250  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 41/60
	I0830 22:12:59.600657  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 42/60
	I0830 22:13:00.602029  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 43/60
	I0830 22:13:01.603248  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 44/60
	I0830 22:13:02.605485  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 45/60
	I0830 22:13:03.606949  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 46/60
	I0830 22:13:04.608304  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 47/60
	I0830 22:13:05.609653  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 48/60
	I0830 22:13:06.610975  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 49/60
	I0830 22:13:07.613044  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 50/60
	I0830 22:13:08.614504  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 51/60
	I0830 22:13:09.615939  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 52/60
	I0830 22:13:10.617192  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 53/60
	I0830 22:13:11.618515  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 54/60
	I0830 22:13:12.620740  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 55/60
	I0830 22:13:13.622017  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 56/60
	I0830 22:13:14.623487  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 57/60
	I0830 22:13:15.624832  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 58/60
	I0830 22:13:16.626171  993819 main.go:141] libmachine: (no-preload-698195) Waiting for machine to stop 59/60
	I0830 22:13:17.627057  993819 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0830 22:13:17.627109  993819 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:13:17.629065  993819 out.go:177] 
	W0830 22:13:17.630350  993819 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0830 22:13:17.630361  993819 out.go:239] * 
	* 
	W0830 22:13:17.633629  993819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:13:17.634942  993819 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-698195 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195: exit status 3 (18.671776518s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:13:36.308093  994349 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.28:22: connect: no route to host
	E0830 22:13:36.308122  994349 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.28:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-698195" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.67s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (140.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-208903 --alsologtostderr -v=3
E0830 22:11:40.125467  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 22:11:49.716080  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 22:11:57.076591  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-208903 --alsologtostderr -v=3: exit status 82 (2m1.531902948s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-208903"  ...
	* Stopping node "embed-certs-208903"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:11:24.204503  993914 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:11:24.204625  993914 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:11:24.204634  993914 out.go:309] Setting ErrFile to fd 2...
	I0830 22:11:24.204638  993914 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:11:24.204849  993914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:11:24.205082  993914 out.go:303] Setting JSON to false
	I0830 22:11:24.205166  993914 mustload.go:65] Loading cluster: embed-certs-208903
	I0830 22:11:24.205487  993914 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:11:24.205572  993914 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:11:24.205734  993914 mustload.go:65] Loading cluster: embed-certs-208903
	I0830 22:11:24.205848  993914 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:11:24.205880  993914 stop.go:39] StopHost: embed-certs-208903
	I0830 22:11:24.206211  993914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:11:24.206259  993914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:11:24.221703  993914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0830 22:11:24.222193  993914 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:11:24.222828  993914 main.go:141] libmachine: Using API Version  1
	I0830 22:11:24.222853  993914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:11:24.223266  993914 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:11:24.226151  993914 out.go:177] * Stopping node "embed-certs-208903"  ...
	I0830 22:11:24.227607  993914 main.go:141] libmachine: Stopping "embed-certs-208903"...
	I0830 22:11:24.227627  993914 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:11:24.229452  993914 main.go:141] libmachine: (embed-certs-208903) Calling .Stop
	I0830 22:11:24.232806  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 0/60
	I0830 22:11:25.234194  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 1/60
	I0830 22:11:26.235955  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 2/60
	I0830 22:11:27.238557  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 3/60
	I0830 22:11:28.240068  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 4/60
	I0830 22:11:29.242236  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 5/60
	I0830 22:11:30.243871  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 6/60
	I0830 22:11:31.245219  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 7/60
	I0830 22:11:32.247745  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 8/60
	I0830 22:11:33.249119  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 9/60
	I0830 22:11:34.251151  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 10/60
	I0830 22:11:35.252716  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 11/60
	I0830 22:11:36.254221  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 12/60
	I0830 22:11:37.255761  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 13/60
	I0830 22:11:38.257612  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 14/60
	I0830 22:11:39.260068  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 15/60
	I0830 22:11:40.262342  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 16/60
	I0830 22:11:41.264021  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 17/60
	I0830 22:11:42.265272  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 18/60
	I0830 22:11:43.266759  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 19/60
	I0830 22:11:44.269124  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 20/60
	I0830 22:11:45.270544  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 21/60
	I0830 22:11:46.272057  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 22/60
	I0830 22:11:47.274281  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 23/60
	I0830 22:11:48.276285  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 24/60
	I0830 22:11:49.278575  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 25/60
	I0830 22:11:50.280074  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 26/60
	I0830 22:11:51.281485  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 27/60
	I0830 22:11:52.282752  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 28/60
	I0830 22:11:53.284176  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 29/60
	I0830 22:11:54.286437  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 30/60
	I0830 22:11:55.287803  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 31/60
	I0830 22:11:56.289577  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 32/60
	I0830 22:11:57.290820  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 33/60
	I0830 22:11:58.292143  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 34/60
	I0830 22:11:59.294036  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 35/60
	I0830 22:12:00.295750  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 36/60
	I0830 22:12:01.297037  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 37/60
	I0830 22:12:02.298654  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 38/60
	I0830 22:12:03.300199  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 39/60
	I0830 22:12:04.302163  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 40/60
	I0830 22:12:05.303451  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 41/60
	I0830 22:12:06.304854  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 42/60
	I0830 22:12:07.306280  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 43/60
	I0830 22:12:08.307737  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 44/60
	I0830 22:12:09.310003  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 45/60
	I0830 22:12:10.311434  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 46/60
	I0830 22:12:11.313282  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 47/60
	I0830 22:12:12.314731  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 48/60
	I0830 22:12:13.316246  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 49/60
	I0830 22:12:14.318378  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 50/60
	I0830 22:12:15.319730  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 51/60
	I0830 22:12:16.321183  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 52/60
	I0830 22:12:17.322621  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 53/60
	I0830 22:12:18.324132  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 54/60
	I0830 22:12:19.326043  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 55/60
	I0830 22:12:20.327464  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 56/60
	I0830 22:12:21.329463  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 57/60
	I0830 22:12:22.331419  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 58/60
	I0830 22:12:23.333129  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 59/60
	I0830 22:12:24.334415  993914 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0830 22:12:24.334466  993914 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:12:24.334488  993914 retry.go:31] will retry after 1.222796583s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:12:25.557422  993914 stop.go:39] StopHost: embed-certs-208903
	I0830 22:12:25.557788  993914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:12:25.557841  993914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:12:25.573059  993914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0830 22:12:25.573515  993914 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:12:25.574185  993914 main.go:141] libmachine: Using API Version  1
	I0830 22:12:25.574230  993914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:12:25.574595  993914 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:12:25.576543  993914 out.go:177] * Stopping node "embed-certs-208903"  ...
	I0830 22:12:25.577891  993914 main.go:141] libmachine: Stopping "embed-certs-208903"...
	I0830 22:12:25.577907  993914 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:12:25.579678  993914 main.go:141] libmachine: (embed-certs-208903) Calling .Stop
	I0830 22:12:25.583461  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 0/60
	I0830 22:12:26.584778  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 1/60
	I0830 22:12:27.586160  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 2/60
	I0830 22:12:28.587546  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 3/60
	I0830 22:12:29.589410  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 4/60
	I0830 22:12:30.591570  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 5/60
	I0830 22:12:31.592979  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 6/60
	I0830 22:12:32.594221  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 7/60
	I0830 22:12:33.595585  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 8/60
	I0830 22:12:34.596812  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 9/60
	I0830 22:12:35.598484  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 10/60
	I0830 22:12:36.599917  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 11/60
	I0830 22:12:37.601313  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 12/60
	I0830 22:12:38.602619  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 13/60
	I0830 22:12:39.604435  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 14/60
	I0830 22:12:40.606503  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 15/60
	I0830 22:12:41.607818  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 16/60
	I0830 22:12:42.609153  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 17/60
	I0830 22:12:43.610324  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 18/60
	I0830 22:12:44.611707  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 19/60
	I0830 22:12:45.613555  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 20/60
	I0830 22:12:46.615106  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 21/60
	I0830 22:12:47.616505  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 22/60
	I0830 22:12:48.618073  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 23/60
	I0830 22:12:49.619346  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 24/60
	I0830 22:12:50.621034  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 25/60
	I0830 22:12:51.622408  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 26/60
	I0830 22:12:52.623501  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 27/60
	I0830 22:12:53.624813  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 28/60
	I0830 22:12:54.626002  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 29/60
	I0830 22:12:55.628346  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 30/60
	I0830 22:12:56.629490  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 31/60
	I0830 22:12:57.630756  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 32/60
	I0830 22:12:58.632056  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 33/60
	I0830 22:12:59.634120  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 34/60
	I0830 22:13:00.635651  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 35/60
	I0830 22:13:01.636795  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 36/60
	I0830 22:13:02.638033  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 37/60
	I0830 22:13:03.639304  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 38/60
	I0830 22:13:04.640606  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 39/60
	I0830 22:13:05.642127  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 40/60
	I0830 22:13:06.643363  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 41/60
	I0830 22:13:07.644567  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 42/60
	I0830 22:13:08.645962  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 43/60
	I0830 22:13:09.647312  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 44/60
	I0830 22:13:10.648772  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 45/60
	I0830 22:13:11.650008  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 46/60
	I0830 22:13:12.651259  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 47/60
	I0830 22:13:13.652564  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 48/60
	I0830 22:13:14.653901  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 49/60
	I0830 22:13:15.655567  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 50/60
	I0830 22:13:16.656902  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 51/60
	I0830 22:13:17.658035  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 52/60
	I0830 22:13:18.659312  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 53/60
	I0830 22:13:19.660680  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 54/60
	I0830 22:13:20.662478  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 55/60
	I0830 22:13:21.663894  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 56/60
	I0830 22:13:22.665094  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 57/60
	I0830 22:13:23.666263  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 58/60
	I0830 22:13:24.667514  993914 main.go:141] libmachine: (embed-certs-208903) Waiting for machine to stop 59/60
	I0830 22:13:25.668439  993914 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0830 22:13:25.668486  993914 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:13:25.670504  993914 out.go:177] 
	W0830 22:13:25.672011  993914 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0830 22:13:25.672024  993914 out.go:239] * 
	* 
	W0830 22:13:25.675146  993914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:13:25.676648  993914 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-208903 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903: exit status 3 (18.565737638s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:13:44.244106  994400 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.159:22: connect: no route to host
	E0830 22:13:44.244126  994400 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-208903" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-791007 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-791007 --alsologtostderr -v=3: exit status 82 (2m0.920247236s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-791007"  ...
	* Stopping node "default-k8s-diff-port-791007"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:12:40.077021  994231 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:12:40.077151  994231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:12:40.077164  994231 out.go:309] Setting ErrFile to fd 2...
	I0830 22:12:40.077169  994231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:12:40.077365  994231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:12:40.077613  994231 out.go:303] Setting JSON to false
	I0830 22:12:40.077710  994231 mustload.go:65] Loading cluster: default-k8s-diff-port-791007
	I0830 22:12:40.078070  994231 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:12:40.078169  994231 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/config.json ...
	I0830 22:12:40.078348  994231 mustload.go:65] Loading cluster: default-k8s-diff-port-791007
	I0830 22:12:40.078492  994231 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:12:40.078537  994231 stop.go:39] StopHost: default-k8s-diff-port-791007
	I0830 22:12:40.078903  994231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:12:40.078966  994231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:12:40.095188  994231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0830 22:12:40.095839  994231 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:12:40.096510  994231 main.go:141] libmachine: Using API Version  1
	I0830 22:12:40.096546  994231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:12:40.096937  994231 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:12:40.099699  994231 out.go:177] * Stopping node "default-k8s-diff-port-791007"  ...
	I0830 22:12:40.101215  994231 main.go:141] libmachine: Stopping "default-k8s-diff-port-791007"...
	I0830 22:12:40.101238  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:12:40.102791  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Stop
	I0830 22:12:40.105929  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 0/60
	I0830 22:12:41.107408  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 1/60
	I0830 22:12:42.108615  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 2/60
	I0830 22:12:43.110323  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 3/60
	I0830 22:12:44.111643  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 4/60
	I0830 22:12:45.113569  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 5/60
	I0830 22:12:46.115425  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 6/60
	I0830 22:12:47.116768  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 7/60
	I0830 22:12:48.118303  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 8/60
	I0830 22:12:49.119598  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 9/60
	I0830 22:12:50.120970  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 10/60
	I0830 22:12:51.122835  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 11/60
	I0830 22:12:52.124174  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 12/60
	I0830 22:12:53.125578  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 13/60
	I0830 22:12:54.126924  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 14/60
	I0830 22:12:55.129262  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 15/60
	I0830 22:12:56.130582  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 16/60
	I0830 22:12:57.132283  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 17/60
	I0830 22:12:58.133641  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 18/60
	I0830 22:12:59.135016  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 19/60
	I0830 22:13:00.137199  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 20/60
	I0830 22:13:01.138481  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 21/60
	I0830 22:13:02.139970  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 22/60
	I0830 22:13:03.141281  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 23/60
	I0830 22:13:04.142582  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 24/60
	I0830 22:13:05.144157  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 25/60
	I0830 22:13:06.145531  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 26/60
	I0830 22:13:07.146988  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 27/60
	I0830 22:13:08.148491  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 28/60
	I0830 22:13:09.149917  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 29/60
	I0830 22:13:10.152091  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 30/60
	I0830 22:13:11.153384  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 31/60
	I0830 22:13:12.154668  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 32/60
	I0830 22:13:13.156096  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 33/60
	I0830 22:13:14.157541  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 34/60
	I0830 22:13:15.159140  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 35/60
	I0830 22:13:16.160748  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 36/60
	I0830 22:13:17.161978  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 37/60
	I0830 22:13:18.163362  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 38/60
	I0830 22:13:19.164806  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 39/60
	I0830 22:13:20.167028  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 40/60
	I0830 22:13:21.168543  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 41/60
	I0830 22:13:22.169677  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 42/60
	I0830 22:13:23.170970  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 43/60
	I0830 22:13:24.172142  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 44/60
	I0830 22:13:25.173953  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 45/60
	I0830 22:13:26.175337  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 46/60
	I0830 22:13:27.176913  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 47/60
	I0830 22:13:28.178191  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 48/60
	I0830 22:13:29.179558  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 49/60
	I0830 22:13:30.181586  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 50/60
	I0830 22:13:31.182987  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 51/60
	I0830 22:13:32.184314  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 52/60
	I0830 22:13:33.185708  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 53/60
	I0830 22:13:34.186978  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 54/60
	I0830 22:13:35.188998  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 55/60
	I0830 22:13:36.190403  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 56/60
	I0830 22:13:37.191922  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 57/60
	I0830 22:13:38.193462  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 58/60
	I0830 22:13:39.194875  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 59/60
	I0830 22:13:40.196184  994231 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0830 22:13:40.196266  994231 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:13:40.196297  994231 retry.go:31] will retry after 622.283251ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:13:40.818745  994231 stop.go:39] StopHost: default-k8s-diff-port-791007
	I0830 22:13:40.819173  994231 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:13:40.819234  994231 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:13:40.833923  994231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43143
	I0830 22:13:40.834424  994231 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:13:40.834979  994231 main.go:141] libmachine: Using API Version  1
	I0830 22:13:40.835001  994231 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:13:40.835371  994231 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:13:40.837436  994231 out.go:177] * Stopping node "default-k8s-diff-port-791007"  ...
	I0830 22:13:40.838788  994231 main.go:141] libmachine: Stopping "default-k8s-diff-port-791007"...
	I0830 22:13:40.838806  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:13:40.840342  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Stop
	I0830 22:13:40.843450  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 0/60
	I0830 22:13:41.844755  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 1/60
	I0830 22:13:42.846195  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 2/60
	I0830 22:13:43.848221  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 3/60
	I0830 22:13:44.850495  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 4/60
	I0830 22:13:45.852003  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 5/60
	I0830 22:13:46.853562  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 6/60
	I0830 22:13:47.855086  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 7/60
	I0830 22:13:48.856594  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 8/60
	I0830 22:13:49.858118  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 9/60
	I0830 22:13:50.860053  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 10/60
	I0830 22:13:51.861383  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 11/60
	I0830 22:13:52.862715  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 12/60
	I0830 22:13:53.864114  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 13/60
	I0830 22:13:54.865383  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 14/60
	I0830 22:13:55.867329  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 15/60
	I0830 22:13:56.868684  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 16/60
	I0830 22:13:57.870178  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 17/60
	I0830 22:13:58.871609  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 18/60
	I0830 22:13:59.873162  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 19/60
	I0830 22:14:00.874785  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 20/60
	I0830 22:14:01.876180  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 21/60
	I0830 22:14:02.877555  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 22/60
	I0830 22:14:03.878788  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 23/60
	I0830 22:14:04.880147  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 24/60
	I0830 22:14:05.881699  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 25/60
	I0830 22:14:06.882956  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 26/60
	I0830 22:14:07.884228  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 27/60
	I0830 22:14:08.886287  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 28/60
	I0830 22:14:09.887711  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 29/60
	I0830 22:14:10.889257  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 30/60
	I0830 22:14:11.890620  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 31/60
	I0830 22:14:12.891909  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 32/60
	I0830 22:14:13.893287  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 33/60
	I0830 22:14:14.894619  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 34/60
	I0830 22:14:15.895965  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 35/60
	I0830 22:14:16.897304  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 36/60
	I0830 22:14:17.899007  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 37/60
	I0830 22:14:18.900508  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 38/60
	I0830 22:14:19.902437  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 39/60
	I0830 22:14:20.904256  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 40/60
	I0830 22:14:21.905567  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 41/60
	I0830 22:14:22.906884  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 42/60
	I0830 22:14:23.908137  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 43/60
	I0830 22:14:24.909374  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 44/60
	I0830 22:14:25.910595  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 45/60
	I0830 22:14:26.912756  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 46/60
	I0830 22:14:27.914791  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 47/60
	I0830 22:14:28.915890  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 48/60
	I0830 22:14:29.917090  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 49/60
	I0830 22:14:30.918384  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 50/60
	I0830 22:14:31.919621  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 51/60
	I0830 22:14:32.920796  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 52/60
	I0830 22:14:33.921939  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 53/60
	I0830 22:14:34.923025  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 54/60
	I0830 22:14:35.924908  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 55/60
	I0830 22:14:36.926103  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 56/60
	I0830 22:14:37.927080  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 57/60
	I0830 22:14:38.928228  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 58/60
	I0830 22:14:39.929958  994231 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for machine to stop 59/60
	I0830 22:14:40.930891  994231 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0830 22:14:40.930938  994231 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:14:40.933014  994231 out.go:177] 
	W0830 22:14:40.934596  994231 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0830 22:14:40.934615  994231 out.go:239] * 
	* 
	W0830 22:14:40.937849  994231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:14:40.939355  994231 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-791007 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007: exit status 3 (18.566323795s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:14:59.508130  994981 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.104:22: connect: no route to host
	E0830 22:14:59.508157  994981 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.104:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-791007" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.49s)
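
The failure above follows a fixed pattern: libmachine polls the guest once per second for 60 attempts ("Waiting for machine to stop N/60"), surfaces a temporary "unable to stop vm" error, retries once after a sub-second backoff, and finally exits with GUEST_STOP_TIMEOUT. The Go sketch below is a minimal illustration of that poll-retry-timeout shape, not minikube's actual implementation; getState, the attempt count, and the backoff value are lifted from the log only for flavor.

// Minimal sketch (assumed shape, not minikube's actual code) of the
// stop/retry pattern visible in the log: poll the VM state once per
// interval for 60 attempts, report a temporary error while it is still
// "Running", retry once after a short backoff, then give up with a
// GUEST_STOP_TIMEOUT-style error.
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollInterval is one second in the real log; shortened here so the
// sketch finishes quickly.
const pollInterval = 100 * time.Millisecond

var errStillRunning = errors.New(`unable to stop vm, current state "Running"`)

// waitForStop polls getState up to attempts times. getState stands in for
// whatever reports the hypervisor's view of the domain; it is hypothetical.
func waitForStop(getState func() string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(pollInterval)
	}
	return errStillRunning
}

func stopWithRetry(getState func() string) error {
	if err := waitForStop(getState, 60); err == nil {
		return nil
	}
	// One retry after a sub-second backoff, mirroring "will retry after 622.283251ms".
	time.Sleep(622 * time.Millisecond)
	if err := waitForStop(getState, 60); err != nil {
		return fmt.Errorf("GUEST_STOP_TIMEOUT: Unable to stop VM: %w", err)
	}
	return nil
}

func main() {
	// A guest that never powers off, as the kvm2 VM behaved in this run.
	alwaysRunning := func() string { return "Running" }
	if err := stopWithRetry(alwaysRunning); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

With a guest that never leaves "Running", the sketch reproduces the same two 60-iteration waits followed by the GUEST_STOP_TIMEOUT-style exit seen in the stderr above.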

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195: exit status 3 (3.167698135s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:13:39.476154  994454 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.28:22: connect: no route to host
	E0830 22:13:39.476174  994454 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.28:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-698195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-698195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.16012673s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.28:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-698195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195: exit status 3 (3.055794434s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:13:48.692147  994567 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.28:22: connect: no route to host
	E0830 22:13:48.692174  994567 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.28:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-698195" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
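
For context, the "Error" state that trips this subtest comes from the status probe still needing to reach the guest over SSH: after the failed stop left the VM unreachable, the TCP dial to port 22 fails with "connect: no route to host" and the host is reported as "Error" instead of the expected "Stopped". The snippet below is a hypothetical, stdlib-only sketch of that probe; hostStatus and the hard-coded address are illustrative, not the suite's real helpers.

// Hypothetical, stdlib-only sketch of the reachability probe behind the
// "Error" state: a TCP dial to the guest's SSH port that fails the same
// way the log does. hostStatus and the address are illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

// hostStatus reports "Running" if the SSH port answers and "Error"
// otherwise, echoing how an unreachable guest surfaces as state="Error"
// rather than the "Stopped" the test expects.
func hostStatus(sshAddr string) (string, error) {
	conn, err := net.DialTimeout("tcp", sshAddr, 5*time.Second)
	if err != nil {
		// e.g. "dial tcp 192.168.72.28:22: connect: no route to host"
		return "Error", err
	}
	defer conn.Close()
	return "Running", nil
}

func main() {
	state, err := hostStatus("192.168.72.28:22")
	fmt.Println(state, err)
}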

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903: exit status 3 (3.200026064s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:13:47.444130  994537 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.159:22: connect: no route to host
	E0830 22:13:47.444148  994537 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.159:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-208903 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-208903 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154681137s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.159:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-208903 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903: exit status 3 (3.061316491s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:13:56.660145  994675 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.159:22: connect: no route to host
	E0830 22:13:56.660173  994675 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-208903" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
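
Both EnableAddonAfterStop failures hinge on the numeric exit codes the harness reads back: exit status 3 from the status probe (treated as "may be ok") and exit status 11 from the addon enable. The stdlib-only Go sketch below shows one way to run such a command and extract that code; the helper name is made up and the binary path and arguments are copied from the log purely as an example.

// Illustrative, stdlib-only sketch (made-up helper, not the suite's code)
// of running a command and reading back the exit code that the report
// prints as "exit status N".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func runAndClassify(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("%s: ok\n%s", name, out)
	case errors.As(err, &exitErr):
		// Non-zero exit from the command itself, e.g. 3 from the status
		// probe ("may be ok") or 11 from the addon enable.
		fmt.Printf("%s: exit status %d\n", name, exitErr.ExitCode())
	default:
		// The command could not be started at all.
		fmt.Printf("%s: could not run: %v\n", name, err)
	}
}

func main() {
	runAndClassify("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "embed-certs-208903")
}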

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (410.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-208903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-208903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: exit status 80 (5m54.478992142s)

                                                
                                                
-- stdout --
	* [embed-certs-208903] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node embed-certs-208903 in cluster embed-certs-208903
	* Restarting existing kvm2 VM for "embed-certs-208903" ...
	* Updating the running kvm2 "embed-certs-208903" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:13:56.716350  994705 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:13:56.716466  994705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:13:56.716474  994705 out.go:309] Setting ErrFile to fd 2...
	I0830 22:13:56.716478  994705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:13:56.716683  994705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:13:56.717232  994705 out.go:303] Setting JSON to false
	I0830 22:13:56.718196  994705 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14184,"bootTime":1693419453,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:13:56.718268  994705 start.go:138] virtualization: kvm guest
	I0830 22:13:56.721795  994705 out.go:177] * [embed-certs-208903] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:13:56.723387  994705 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:13:56.723479  994705 notify.go:220] Checking for updates...
	I0830 22:13:56.724895  994705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:13:56.726573  994705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:13:56.728145  994705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:13:56.729672  994705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:13:56.731206  994705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:13:56.733123  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:13:56.733478  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:13:56.733544  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:13:56.747823  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42125
	I0830 22:13:56.748205  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:13:56.748900  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:13:56.748926  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:13:56.749267  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:13:56.749445  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:13:56.749698  994705 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:13:56.749970  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:13:56.750003  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:13:56.764216  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0830 22:13:56.764575  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:13:56.765009  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:13:56.765034  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:13:56.765403  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:13:56.765594  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:13:56.798754  994705 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:13:56.800300  994705 start.go:298] selected driver: kvm2
	I0830 22:13:56.800320  994705 start.go:902] validating driver "kvm2" against &{Name:embed-certs-208903 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-208903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:13:56.800474  994705 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:13:56.801339  994705 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:13:56.801411  994705 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:13:56.816123  994705 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:13:56.816557  994705 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:13:56.816597  994705 cni.go:84] Creating CNI manager for ""
	I0830 22:13:56.816609  994705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:13:56.816629  994705 start_flags.go:319] config:
	{Name:embed-certs-208903 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-208903 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.159 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:13:56.816814  994705 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:13:56.818991  994705 out.go:177] * Starting control plane node embed-certs-208903 in cluster embed-certs-208903
	I0830 22:13:56.820639  994705 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:13:56.820676  994705 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 22:13:56.820686  994705 cache.go:57] Caching tarball of preloaded images
	I0830 22:13:56.820764  994705 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:13:56.820809  994705 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 22:13:56.820935  994705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:13:56.821141  994705 start.go:365] acquiring machines lock for embed-certs-208903: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:26.288205  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 4m29.4670209s
	I0830 22:18:26.288261  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:26.288276  994705 fix.go:54] fixHost starting: 
	I0830 22:18:26.288621  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:26.288656  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:26.304048  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0830 22:18:26.304613  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:26.305138  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:18:26.305164  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:26.305518  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:26.305719  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:26.305843  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:18:26.307597  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Stopped err=<nil>
	I0830 22:18:26.307639  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	W0830 22:18:26.307827  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:26.309985  994705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208903" ...
	I0830 22:18:26.311551  994705 main.go:141] libmachine: (embed-certs-208903) Calling .Start
	I0830 22:18:26.311750  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring networks are active...
	I0830 22:18:26.312528  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network default is active
	I0830 22:18:26.312814  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network mk-embed-certs-208903 is active
	I0830 22:18:26.313153  994705 main.go:141] libmachine: (embed-certs-208903) Getting domain xml...
	I0830 22:18:26.313857  994705 main.go:141] libmachine: (embed-certs-208903) Creating domain...
	I0830 22:18:27.529120  994705 main.go:141] libmachine: (embed-certs-208903) Waiting to get IP...
	I0830 22:18:27.530028  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.530390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.530515  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.530404  996319 retry.go:31] will retry after 311.351139ms: waiting for machine to come up
	I0830 22:18:27.843013  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.843398  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.843427  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.843337  996319 retry.go:31] will retry after 367.953943ms: waiting for machine to come up
	I0830 22:18:28.213214  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.213785  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.213820  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.213722  996319 retry.go:31] will retry after 424.275825ms: waiting for machine to come up
	I0830 22:18:28.639216  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.639670  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.639707  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.639609  996319 retry.go:31] will retry after 502.321201ms: waiting for machine to come up
	I0830 22:18:29.143240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.143823  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.143850  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.143790  996319 retry.go:31] will retry after 680.495047ms: waiting for machine to come up
	I0830 22:18:29.825462  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.825879  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.825904  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.825836  996319 retry.go:31] will retry after 756.63617ms: waiting for machine to come up
	I0830 22:18:30.583723  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:30.584179  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:30.584212  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:30.584118  996319 retry.go:31] will retry after 851.722792ms: waiting for machine to come up
	I0830 22:18:31.437603  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:31.438031  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:31.438063  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:31.437986  996319 retry.go:31] will retry after 1.214893807s: waiting for machine to come up
	I0830 22:18:32.654351  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:32.654803  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:32.654829  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:32.654756  996319 retry.go:31] will retry after 1.574180335s: waiting for machine to come up
	I0830 22:18:34.231491  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:34.231911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:34.231944  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:34.231826  996319 retry.go:31] will retry after 1.99107048s: waiting for machine to come up
	I0830 22:18:36.225911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:36.226336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:36.226363  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:36.226283  996319 retry.go:31] will retry after 1.816508761s: waiting for machine to come up
	I0830 22:18:38.044672  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:38.045061  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:38.045094  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:38.045021  996319 retry.go:31] will retry after 2.343148299s: waiting for machine to come up
	I0830 22:18:40.389346  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:40.389753  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:40.389778  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:40.389700  996319 retry.go:31] will retry after 3.682098761s: waiting for machine to come up
	I0830 22:18:44.074916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075386  994705 main.go:141] libmachine: (embed-certs-208903) Found IP for machine: 192.168.50.159
	I0830 22:18:44.075411  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has current primary IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075418  994705 main.go:141] libmachine: (embed-certs-208903) Reserving static IP address...
	I0830 22:18:44.075899  994705 main.go:141] libmachine: (embed-certs-208903) Reserved static IP address: 192.168.50.159
	I0830 22:18:44.075928  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.075939  994705 main.go:141] libmachine: (embed-certs-208903) Waiting for SSH to be available...
	I0830 22:18:44.075959  994705 main.go:141] libmachine: (embed-certs-208903) DBG | skip adding static IP to network mk-embed-certs-208903 - found existing host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"}
	I0830 22:18:44.075968  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Getting to WaitForSSH function...
	I0830 22:18:44.078068  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.078436  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH client type: external
	I0830 22:18:44.078533  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa (-rw-------)
	I0830 22:18:44.078569  994705 main.go:141] libmachine: (embed-certs-208903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:18:44.078590  994705 main.go:141] libmachine: (embed-certs-208903) DBG | About to run SSH command:
	I0830 22:18:44.078622  994705 main.go:141] libmachine: (embed-certs-208903) DBG | exit 0
	I0830 22:18:44.167514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | SSH cmd err, output: <nil>: 
	I0830 22:18:44.167898  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetConfigRaw
	I0830 22:18:44.168594  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.170974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.171370  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171696  994705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:18:44.171967  994705 machine.go:88] provisioning docker machine ...
	I0830 22:18:44.171989  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:44.172184  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172371  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:18:44.172397  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172563  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.174522  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174861  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.174894  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174988  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.175159  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175286  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175413  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.175627  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.176111  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.176132  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:18:44.309192  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:18:44.309225  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.311931  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312327  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.312362  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312512  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.312727  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.312919  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.313048  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.313215  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.313623  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.313638  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:18:44.440529  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:44.440594  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:18:44.440641  994705 buildroot.go:174] setting up certificates
	I0830 22:18:44.440653  994705 provision.go:83] configureAuth start
	I0830 22:18:44.440663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.440943  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.443289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443663  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.443705  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443805  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.445987  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446297  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.446328  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446462  994705 provision.go:138] copyHostCerts
	I0830 22:18:44.446524  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:18:44.446550  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:18:44.446638  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:18:44.446750  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:18:44.446763  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:18:44.446800  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:18:44.446907  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:18:44.446919  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:18:44.446955  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:18:44.447036  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:18:44.664313  994705 provision.go:172] copyRemoteCerts
	I0830 22:18:44.664387  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:18:44.664434  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.666819  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667160  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.667192  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667338  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.667565  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.667687  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.667839  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:18:44.756922  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:18:44.780430  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:18:44.803396  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:18:44.825975  994705 provision.go:86] duration metric: configureAuth took 385.307932ms
	I0830 22:18:44.826006  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:18:44.826230  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:18:44.826334  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.828862  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829199  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.829240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829383  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.829606  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829770  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829907  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.830104  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.830593  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.830615  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:18:45.025539  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025585  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:18:45.025596  994705 machine.go:91] provisioned docker machine in 853.613637ms
	I0830 22:18:45.025627  994705 fix.go:56] fixHost completed within 18.737351046s
	I0830 22:18:45.025637  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 18.737393499s
	W0830 22:18:45.025662  994705 start.go:672] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0830 22:18:45.025746  994705 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025760  994705 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:50.028247  994705 start.go:365] acquiring machines lock for embed-certs-208903: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:19:50.136893  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 1m0.108561967s
	I0830 22:19:50.136941  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:50.136952  994705 fix.go:54] fixHost starting: 
	I0830 22:19:50.137347  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:50.137386  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:50.156678  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0830 22:19:50.157148  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:50.157739  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:19:50.157765  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:50.158103  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:50.158283  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.158445  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:19:50.160098  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Running err=<nil>
	W0830 22:19:50.160115  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:50.162162  994705 out.go:177] * Updating the running kvm2 "embed-certs-208903" VM ...
	I0830 22:19:50.163634  994705 machine.go:88] provisioning docker machine ...
	I0830 22:19:50.163663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.163906  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164077  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:19:50.164104  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164288  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.166831  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167198  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.167234  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167371  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.167561  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167731  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167902  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.168108  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.168592  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.168610  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:19:50.306738  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:19:50.306772  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.309523  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.309929  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.309974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.310182  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.310349  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310638  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310845  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.311027  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.311610  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.311644  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:50.433972  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:50.434005  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:50.434045  994705 buildroot.go:174] setting up certificates
	I0830 22:19:50.434057  994705 provision.go:83] configureAuth start
	I0830 22:19:50.434069  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.434388  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:19:50.437450  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.437883  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.437916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.438115  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.440654  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.441059  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441213  994705 provision.go:138] copyHostCerts
	I0830 22:19:50.441271  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:50.441283  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:50.441352  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:50.441453  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:50.441462  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:50.441481  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:50.441563  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:50.441575  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:50.441606  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:50.441684  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:19:50.721978  994705 provision.go:172] copyRemoteCerts
	I0830 22:19:50.722039  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:50.722072  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.724893  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725257  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.725289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725571  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.725799  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.726014  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.726181  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:19:50.817217  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:50.843335  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:50.869233  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:50.897508  994705 provision.go:86] duration metric: configureAuth took 463.432948ms
	I0830 22:19:50.897544  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:50.897804  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:50.897904  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.900633  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.901040  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901210  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.901404  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901547  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901680  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.901875  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.902287  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.902310  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:51.128816  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.128855  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:19:51.128866  994705 machine.go:91] provisioned docker machine in 965.212906ms
	I0830 22:19:51.128900  994705 fix.go:56] fixHost completed within 991.948899ms
	I0830 22:19:51.128906  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 991.990648ms
	W0830 22:19:51.129050  994705 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p embed-certs-208903" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p embed-certs-208903" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.131823  994705 out.go:177] 
	W0830 22:19:51.133957  994705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0830 22:19:51.133985  994705 out.go:239] * 
	* 
	W0830 22:19:51.134788  994705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:19:51.136212  994705 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-208903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1": exit status 80
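(Note, not part of the captured log: the exit status 80 above comes from the same provisioning step failing on both attempts, with `sudo systemctl restart crio` reporting "A dependency job for crio.service failed". A minimal diagnostic sketch for such a node is shown below; it assumes SSH access to the guest, e.g. via `minikube ssh -p embed-certs-208903`, and uses only standard systemd/journalctl commands rather than anything reproduced from this run.)

	# Sketch: inspecting why 'systemctl restart crio' fails with a dependency error.
	# Assumes SSH access to the node; these are generic systemd commands, not log output.
	sudo systemctl --failed                 # list units currently in the failed state
	sudo systemctl list-dependencies crio   # show which units crio.service requires
	sudo systemctl status crio              # current state and result of the last start attempt
	sudo journalctl -xe -u crio             # recent journal entries for crio.service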
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903: exit status 2 (276.997412ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-208903 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-208903 logs -n 25: (54.88776789s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-519738 -- sudo                         | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-519738                                 | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-184733                              | stopped-upgrade-184733       | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:09 UTC |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	| delete  | -p                                                     | disable-driver-mounts-883991 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | disable-driver-mounts-883991                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:12 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-698195             | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208903            | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-791007  | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC | 30 Aug 23 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-698195                  | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208903                 | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-250163        | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC | 30 Aug 23 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-791007       | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-250163             | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:16:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:16:59.758341  995603 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:16:59.758470  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758479  995603 out.go:309] Setting ErrFile to fd 2...
	I0830 22:16:59.758484  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758692  995603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:16:59.759241  995603 out.go:303] Setting JSON to false
	I0830 22:16:59.760232  995603 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14367,"bootTime":1693419453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:16:59.760291  995603 start.go:138] virtualization: kvm guest
	I0830 22:16:59.762744  995603 out.go:177] * [old-k8s-version-250163] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:16:59.764395  995603 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:16:59.765863  995603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:16:59.764404  995603 notify.go:220] Checking for updates...
	I0830 22:16:59.767579  995603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:16:59.769244  995603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:16:59.771001  995603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:16:59.772625  995603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:16:59.774574  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:16:59.774929  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.775032  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.790271  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0830 22:16:59.790677  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.791257  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.791283  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.791645  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.791879  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.793885  995603 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:16:59.795414  995603 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:16:59.795716  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.795752  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.810316  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0830 22:16:59.810694  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.811176  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.811201  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.811560  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.811808  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.845962  995603 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:16:59.847399  995603 start.go:298] selected driver: kvm2
	I0830 22:16:59.847410  995603 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.847546  995603 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:16:59.848301  995603 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.848376  995603 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:16:59.862654  995603 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:16:59.863040  995603 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:16:59.863080  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:16:59.863094  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:16:59.863109  995603 start_flags.go:319] config:
	{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.863341  995603 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.865500  995603 out.go:177] * Starting control plane node old-k8s-version-250163 in cluster old-k8s-version-250163
	I0830 22:17:00.916070  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:16:59.866763  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:16:59.866836  995603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0830 22:16:59.866852  995603 cache.go:57] Caching tarball of preloaded images
	I0830 22:16:59.866935  995603 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:16:59.866946  995603 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0830 22:16:59.867091  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:16:59.867314  995603 start.go:365] acquiring machines lock for old-k8s-version-250163: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:17:06.996025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:10.068036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:16.148043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:19.220024  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:25.300036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:28.372088  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:34.452043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:37.524037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:43.604037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:46.676107  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:52.756100  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:55.828195  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:01.908025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:04.980079  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:11.060035  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:14.132025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:20.212050  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:23.283995  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:26.288205  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 4m29.4670209s
	I0830 22:18:26.288261  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:26.288276  994705 fix.go:54] fixHost starting: 
	I0830 22:18:26.288621  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:26.288656  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:26.304048  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0830 22:18:26.304613  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:26.305138  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:18:26.305164  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:26.305518  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:26.305719  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:26.305843  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:18:26.307597  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Stopped err=<nil>
	I0830 22:18:26.307639  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	W0830 22:18:26.307827  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:26.309985  994705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208903" ...
	I0830 22:18:26.311551  994705 main.go:141] libmachine: (embed-certs-208903) Calling .Start
	I0830 22:18:26.311750  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring networks are active...
	I0830 22:18:26.312528  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network default is active
	I0830 22:18:26.312814  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network mk-embed-certs-208903 is active
	I0830 22:18:26.313153  994705 main.go:141] libmachine: (embed-certs-208903) Getting domain xml...
	I0830 22:18:26.313857  994705 main.go:141] libmachine: (embed-certs-208903) Creating domain...
	I0830 22:18:26.285881  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:26.285939  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:18:26.288013  994624 machine.go:91] provisioned docker machine in 4m37.410947228s
	I0830 22:18:26.288063  994624 fix.go:56] fixHost completed within 4m37.432260867s
	I0830 22:18:26.288085  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 4m37.432330775s
	W0830 22:18:26.288107  994624 start.go:672] error starting host: provision: host is not running
	W0830 22:18:26.288219  994624 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0830 22:18:26.288225  994624 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:27.529120  994705 main.go:141] libmachine: (embed-certs-208903) Waiting to get IP...
	I0830 22:18:27.530028  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.530390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.530515  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.530404  996319 retry.go:31] will retry after 311.351139ms: waiting for machine to come up
	I0830 22:18:27.843013  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.843398  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.843427  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.843337  996319 retry.go:31] will retry after 367.953943ms: waiting for machine to come up
	I0830 22:18:28.213214  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.213785  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.213820  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.213722  996319 retry.go:31] will retry after 424.275825ms: waiting for machine to come up
	I0830 22:18:28.639216  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.639670  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.639707  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.639609  996319 retry.go:31] will retry after 502.321201ms: waiting for machine to come up
	I0830 22:18:29.143240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.143823  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.143850  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.143790  996319 retry.go:31] will retry after 680.495047ms: waiting for machine to come up
	I0830 22:18:29.825462  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.825879  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.825904  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.825836  996319 retry.go:31] will retry after 756.63617ms: waiting for machine to come up
	I0830 22:18:30.583723  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:30.584179  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:30.584212  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:30.584118  996319 retry.go:31] will retry after 851.722792ms: waiting for machine to come up
	I0830 22:18:31.437603  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:31.438031  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:31.438063  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:31.437986  996319 retry.go:31] will retry after 1.214893807s: waiting for machine to come up
	I0830 22:18:31.289961  994624 start.go:365] acquiring machines lock for no-preload-698195: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:32.654351  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:32.654803  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:32.654829  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:32.654756  996319 retry.go:31] will retry after 1.574180335s: waiting for machine to come up
	I0830 22:18:34.231491  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:34.231911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:34.231944  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:34.231826  996319 retry.go:31] will retry after 1.99107048s: waiting for machine to come up
	I0830 22:18:36.225911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:36.226336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:36.226363  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:36.226283  996319 retry.go:31] will retry after 1.816508761s: waiting for machine to come up
	I0830 22:18:38.044672  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:38.045061  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:38.045094  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:38.045021  996319 retry.go:31] will retry after 2.343148299s: waiting for machine to come up
	I0830 22:18:40.389346  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:40.389753  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:40.389778  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:40.389700  996319 retry.go:31] will retry after 3.682098761s: waiting for machine to come up
	I0830 22:18:45.025750  995192 start.go:369] acquired machines lock for "default-k8s-diff-port-791007" in 3m32.939054887s
	I0830 22:18:45.025823  995192 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:45.025847  995192 fix.go:54] fixHost starting: 
	I0830 22:18:45.026291  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:45.026333  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:45.041161  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0830 22:18:45.041657  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:45.042176  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:18:45.042208  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:45.042544  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:45.042748  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:18:45.042910  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:18:45.044428  995192 fix.go:102] recreateIfNeeded on default-k8s-diff-port-791007: state=Stopped err=<nil>
	I0830 22:18:45.044454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	W0830 22:18:45.044615  995192 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:45.046538  995192 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-791007" ...
	I0830 22:18:44.074916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075386  994705 main.go:141] libmachine: (embed-certs-208903) Found IP for machine: 192.168.50.159
	I0830 22:18:44.075411  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has current primary IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075418  994705 main.go:141] libmachine: (embed-certs-208903) Reserving static IP address...
	I0830 22:18:44.075899  994705 main.go:141] libmachine: (embed-certs-208903) Reserved static IP address: 192.168.50.159
	I0830 22:18:44.075928  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.075939  994705 main.go:141] libmachine: (embed-certs-208903) Waiting for SSH to be available...
	I0830 22:18:44.075959  994705 main.go:141] libmachine: (embed-certs-208903) DBG | skip adding static IP to network mk-embed-certs-208903 - found existing host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"}
	I0830 22:18:44.075968  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Getting to WaitForSSH function...
	I0830 22:18:44.078068  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.078436  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH client type: external
	I0830 22:18:44.078533  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa (-rw-------)
	I0830 22:18:44.078569  994705 main.go:141] libmachine: (embed-certs-208903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:18:44.078590  994705 main.go:141] libmachine: (embed-certs-208903) DBG | About to run SSH command:
	I0830 22:18:44.078622  994705 main.go:141] libmachine: (embed-certs-208903) DBG | exit 0
	I0830 22:18:44.167514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | SSH cmd err, output: <nil>: 
	I0830 22:18:44.167898  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetConfigRaw
	I0830 22:18:44.168594  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.170974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.171370  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171696  994705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:18:44.171967  994705 machine.go:88] provisioning docker machine ...
	I0830 22:18:44.171989  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:44.172184  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172371  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:18:44.172397  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172563  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.174522  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174861  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.174894  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174988  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.175159  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175286  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175413  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.175627  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.176111  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.176132  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:18:44.309192  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:18:44.309225  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.311931  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312327  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.312362  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312512  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.312727  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.312919  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.313048  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.313215  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.313623  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.313638  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:18:44.440529  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:44.440594  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:18:44.440641  994705 buildroot.go:174] setting up certificates
	I0830 22:18:44.440653  994705 provision.go:83] configureAuth start
	I0830 22:18:44.440663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.440943  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.443289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443663  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.443705  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443805  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.445987  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446297  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.446328  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446462  994705 provision.go:138] copyHostCerts
	I0830 22:18:44.446524  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:18:44.446550  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:18:44.446638  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:18:44.446750  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:18:44.446763  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:18:44.446800  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:18:44.446907  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:18:44.446919  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:18:44.446955  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:18:44.447036  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:18:44.664313  994705 provision.go:172] copyRemoteCerts
	I0830 22:18:44.664387  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:18:44.664434  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.666819  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667160  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.667192  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667338  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.667565  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.667687  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.667839  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:18:44.756922  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:18:44.780430  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:18:44.803396  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:18:44.825975  994705 provision.go:86] duration metric: configureAuth took 385.307932ms
	I0830 22:18:44.826006  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:18:44.826230  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:18:44.826334  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.828862  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829199  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.829240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829383  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.829606  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829770  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829907  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.830104  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.830593  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.830615  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:18:45.025539  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025585  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:18:45.025596  994705 machine.go:91] provisioned docker machine in 853.613637ms
	I0830 22:18:45.025627  994705 fix.go:56] fixHost completed within 18.737351046s
	I0830 22:18:45.025637  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 18.737393499s
	W0830 22:18:45.025662  994705 start.go:672] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0830 22:18:45.025746  994705 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025760  994705 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:45.047821  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Start
	I0830 22:18:45.047982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring networks are active...
	I0830 22:18:45.048684  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network default is active
	I0830 22:18:45.049040  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network mk-default-k8s-diff-port-791007 is active
	I0830 22:18:45.049401  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Getting domain xml...
	I0830 22:18:45.050009  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Creating domain...
	I0830 22:18:46.288943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting to get IP...
	I0830 22:18:46.289982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290359  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.290388  996430 retry.go:31] will retry after 228.105709ms: waiting for machine to come up
	I0830 22:18:46.519862  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520369  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.520342  996430 retry.go:31] will retry after 343.008473ms: waiting for machine to come up
	I0830 22:18:46.865023  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865426  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.865385  996430 retry.go:31] will retry after 467.017605ms: waiting for machine to come up
	I0830 22:18:50.028247  994705 start.go:365] acquiring machines lock for embed-certs-208903: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:47.334027  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334655  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.334600  996430 retry.go:31] will retry after 601.952764ms: waiting for machine to come up
	I0830 22:18:47.937980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.938387  996430 retry.go:31] will retry after 556.18277ms: waiting for machine to come up
	I0830 22:18:48.495747  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496130  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:48.496101  996430 retry.go:31] will retry after 696.126701ms: waiting for machine to come up
	I0830 22:18:49.193405  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193789  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193822  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:49.193752  996430 retry.go:31] will retry after 1.123021492s: waiting for machine to come up
	I0830 22:18:50.318326  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318710  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:50.318637  996430 retry.go:31] will retry after 1.198520166s: waiting for machine to come up
	I0830 22:18:51.518959  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519302  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519332  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:51.519244  996430 retry.go:31] will retry after 1.851352392s: waiting for machine to come up
	I0830 22:18:53.373208  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373676  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373713  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:53.373594  996430 retry.go:31] will retry after 1.789163964s: waiting for machine to come up
	I0830 22:18:55.164132  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164664  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:55.164587  996430 retry.go:31] will retry after 2.037803279s: waiting for machine to come up
	I0830 22:18:57.204503  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204957  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204984  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:57.204919  996430 retry.go:31] will retry after 3.365492251s: waiting for machine to come up
	I0830 22:19:00.572195  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:19:00.572533  996430 retry.go:31] will retry after 3.57478782s: waiting for machine to come up
	I0830 22:19:05.536665  995603 start.go:369] acquired machines lock for "old-k8s-version-250163" in 2m5.669275373s
	I0830 22:19:05.536730  995603 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:05.536751  995603 fix.go:54] fixHost starting: 
	I0830 22:19:05.537197  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:05.537240  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:05.556581  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0830 22:19:05.557016  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:05.557559  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:19:05.557590  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:05.557937  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:05.558124  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:05.558290  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:19:05.559829  995603 fix.go:102] recreateIfNeeded on old-k8s-version-250163: state=Stopped err=<nil>
	I0830 22:19:05.559871  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	W0830 22:19:05.560056  995603 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:05.562726  995603 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-250163" ...
	I0830 22:19:04.151280  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.151787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Found IP for machine: 192.168.61.104
	I0830 22:19:04.151820  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserving static IP address...
	I0830 22:19:04.151839  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has current primary IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.152254  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.152286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserved static IP address: 192.168.61.104
	I0830 22:19:04.152306  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | skip adding static IP to network mk-default-k8s-diff-port-791007 - found existing host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"}
	I0830 22:19:04.152324  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for SSH to be available...
	I0830 22:19:04.152339  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Getting to WaitForSSH function...
	I0830 22:19:04.154335  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.154701  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154791  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH client type: external
	I0830 22:19:04.154833  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa (-rw-------)
	I0830 22:19:04.154852  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:04.154868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | About to run SSH command:
	I0830 22:19:04.154879  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | exit 0
	I0830 22:19:04.251692  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:04.252182  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetConfigRaw
	I0830 22:19:04.252842  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.255184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255536  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.255571  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255850  995192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/config.json ...
	I0830 22:19:04.256118  995192 machine.go:88] provisioning docker machine ...
	I0830 22:19:04.256143  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:04.256344  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256504  995192 buildroot.go:166] provisioning hostname "default-k8s-diff-port-791007"
	I0830 22:19:04.256525  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256653  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.259010  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259366  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.259389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259509  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.259667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.260115  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.260787  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.260810  995192 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-791007 && echo "default-k8s-diff-port-791007" | sudo tee /etc/hostname
	I0830 22:19:04.403123  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-791007
	
	I0830 22:19:04.403166  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.405835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406219  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.406270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406476  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.406704  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.406892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.407047  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.407233  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.407634  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.407658  995192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-791007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-791007/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-791007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:04.549964  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:04.550002  995192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:04.550039  995192 buildroot.go:174] setting up certificates
	I0830 22:19:04.550053  995192 provision.go:83] configureAuth start
	I0830 22:19:04.550071  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.550422  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.552844  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553116  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.553150  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553313  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.555514  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.555880  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.555917  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.556036  995192 provision.go:138] copyHostCerts
	I0830 22:19:04.556100  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:04.556133  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:04.556213  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:04.556343  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:04.556354  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:04.556392  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:04.556485  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:04.556496  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:04.556528  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:04.556607  995192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-791007 san=[192.168.61.104 192.168.61.104 localhost 127.0.0.1 minikube default-k8s-diff-port-791007]
	I0830 22:19:04.756354  995192 provision.go:172] copyRemoteCerts
	I0830 22:19:04.756413  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:04.756438  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.759134  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759511  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.759544  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759739  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.759977  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.760153  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.760297  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:04.858949  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:04.882455  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0830 22:19:04.905659  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:04.929876  995192 provision.go:86] duration metric: configureAuth took 379.794026ms
	I0830 22:19:04.929905  995192 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:04.930124  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:04.930228  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.932799  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933159  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.933192  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933316  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.933531  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933703  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.934015  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.934606  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.934633  995192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:05.266317  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:05.266349  995192 machine.go:91] provisioned docker machine in 1.010213866s
	I0830 22:19:05.266363  995192 start.go:300] post-start starting for "default-k8s-diff-port-791007" (driver="kvm2")
	I0830 22:19:05.266378  995192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:05.266402  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.266764  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:05.266802  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.269938  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270300  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.270345  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270472  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.270650  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.270800  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.270922  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.365334  995192 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:05.369583  995192 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:05.369608  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:05.369701  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:05.369790  995192 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:05.369879  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:05.377933  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:05.401027  995192 start.go:303] post-start completed in 134.648062ms
	I0830 22:19:05.401051  995192 fix.go:56] fixHost completed within 20.37520461s
	I0830 22:19:05.401079  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.404156  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.404629  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404765  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.404960  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405138  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405260  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.405463  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:05.405917  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:05.405930  995192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 22:19:05.536449  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433945.485000324
	
	I0830 22:19:05.536479  995192 fix.go:206] guest clock: 1693433945.485000324
	I0830 22:19:05.536490  995192 fix.go:219] Guest: 2023-08-30 22:19:05.485000324 +0000 UTC Remote: 2023-08-30 22:19:05.401056033 +0000 UTC m=+233.468479321 (delta=83.944291ms)
	I0830 22:19:05.536524  995192 fix.go:190] guest clock delta is within tolerance: 83.944291ms
	I0830 22:19:05.536535  995192 start.go:83] releasing machines lock for "default-k8s-diff-port-791007", held for 20.510742441s
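	(For reference, a minimal Go sketch of the guest-clock check logged by fix.go above — not minikube's actual code: it compares the VM's date +%s.%N reading against the host wall clock and reports whether the skew stays inside an assumed tolerance.)

	package main

	import (
		"fmt"
		"time"
	)

	// Minimal sketch, not minikube's actual implementation: compare the guest
	// clock reading (seconds.nanoseconds returned by the VM) against the
	// host's wall clock and report whether the skew is within an assumed
	// tolerance.
	func main() {
		guest := time.Unix(1693433945, 485000324).UTC()                // guest value taken from the log above
		host := time.Date(2023, 8, 30, 22, 19, 5, 401056033, time.UTC) // host-side timestamp from the log

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}

		tolerance := 2 * time.Second // assumed threshold, for illustration only
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}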
	I0830 22:19:05.536569  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.536868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:05.539651  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540017  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.540057  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540196  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540737  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540911  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540975  995192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:05.541036  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.541133  995192 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:05.541172  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.543846  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.543892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544250  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544317  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544411  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544540  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544627  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544707  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544792  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544865  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544926  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.544972  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.677442  995192 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:05.683243  995192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:05.832776  995192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:05.838924  995192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:05.839000  995192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:05.857231  995192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:05.857251  995192 start.go:466] detecting cgroup driver to use...
	I0830 22:19:05.857349  995192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:05.875107  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:05.888540  995192 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:05.888603  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:05.901129  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:05.914011  995192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:06.015763  995192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:06.144950  995192 docker.go:212] disabling docker service ...
	I0830 22:19:06.145052  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:06.159373  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:06.172560  995192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:06.279514  995192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:06.413719  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:06.427047  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:06.443765  995192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:06.443853  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.452621  995192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:06.452690  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.461365  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.470052  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.478685  995192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:06.487763  995192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:06.495483  995192 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:06.495551  995192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:06.508009  995192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:06.516397  995192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:06.615209  995192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:06.792388  995192 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:06.792466  995192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:06.798170  995192 start.go:534] Will wait 60s for crictl version
	I0830 22:19:06.798231  995192 ssh_runner.go:195] Run: which crictl
	I0830 22:19:06.801828  995192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:06.842351  995192 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:06.842459  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.898609  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.962179  995192 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:06.963711  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:06.966803  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967189  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:06.967225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967412  995192 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:06.972033  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:05.564313  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Start
	I0830 22:19:05.564511  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring networks are active...
	I0830 22:19:05.565235  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network default is active
	I0830 22:19:05.565567  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network mk-old-k8s-version-250163 is active
	I0830 22:19:05.565954  995603 main.go:141] libmachine: (old-k8s-version-250163) Getting domain xml...
	I0830 22:19:05.566644  995603 main.go:141] libmachine: (old-k8s-version-250163) Creating domain...
	I0830 22:19:06.869485  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting to get IP...
	I0830 22:19:06.870595  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:06.871071  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:06.871133  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:06.871046  996542 retry.go:31] will retry after 294.811471ms: waiting for machine to come up
	I0830 22:19:07.167657  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.168126  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.168172  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.168099  996542 retry.go:31] will retry after 376.474639ms: waiting for machine to come up
	I0830 22:19:07.546876  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.547389  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.547419  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.547354  996542 retry.go:31] will retry after 329.757182ms: waiting for machine to come up
	I0830 22:19:07.878995  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.879572  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.879601  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.879529  996542 retry.go:31] will retry after 567.335814ms: waiting for machine to come up
	I0830 22:19:08.448373  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.448996  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.449028  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.448958  996542 retry.go:31] will retry after 510.216093ms: waiting for machine to come up
	I0830 22:19:08.960855  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.961412  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.961451  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.961326  996542 retry.go:31] will retry after 688.575912ms: waiting for machine to come up
	I0830 22:19:09.651966  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:09.652379  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:09.652411  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:09.652346  996542 retry.go:31] will retry after 1.130912238s: waiting for machine to come up
	I0830 22:19:06.984632  995192 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:06.984698  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:07.020200  995192 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:07.020282  995192 ssh_runner.go:195] Run: which lz4
	I0830 22:19:07.024254  995192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 22:19:07.028470  995192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:07.028508  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:19:08.986852  995192 crio.go:444] Took 1.962647 seconds to copy over tarball
	I0830 22:19:08.986915  995192 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:10.784839  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:10.785424  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:10.785456  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:10.785355  996542 retry.go:31] will retry after 898.98114ms: waiting for machine to come up
	I0830 22:19:11.685890  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:11.686614  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:11.686646  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:11.686558  996542 retry.go:31] will retry after 1.621086004s: waiting for machine to come up
	I0830 22:19:13.310234  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:13.310696  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:13.310721  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:13.310630  996542 retry.go:31] will retry after 1.652651656s: waiting for machine to come up
	I0830 22:19:12.113071  995192 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.126115747s)
	I0830 22:19:12.113107  995192 crio.go:451] Took 3.126230 seconds to extract the tarball
	I0830 22:19:12.113120  995192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:12.156320  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:12.200547  995192 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:19:12.200573  995192 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:19:12.200652  995192 ssh_runner.go:195] Run: crio config
	I0830 22:19:12.273153  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:12.273180  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:12.273205  995192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:12.273231  995192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-791007 NodeName:default-k8s-diff-port-791007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:19:12.273413  995192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-791007"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:12.273497  995192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-791007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0830 22:19:12.273573  995192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:19:12.283536  995192 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:12.283609  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:12.292260  995192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0830 22:19:12.309407  995192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:12.325757  995192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0830 22:19:12.342664  995192 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:12.346459  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:12.358721  995192 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007 for IP: 192.168.61.104
	I0830 22:19:12.358797  995192 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:12.359010  995192 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:12.359066  995192 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:12.359147  995192 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.key
	I0830 22:19:12.359219  995192 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key.a202b4d9
	I0830 22:19:12.359255  995192 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key
	I0830 22:19:12.359363  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:12.359390  995192 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:12.359400  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:12.359424  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:12.359449  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:12.359471  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:12.359507  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:12.360328  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:12.385275  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 22:19:12.410697  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:12.434240  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:19:12.457206  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:12.484695  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:12.507670  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:12.531114  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:12.554501  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:12.579425  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:12.603211  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:12.628506  995192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:12.645536  995192 ssh_runner.go:195] Run: openssl version
	I0830 22:19:12.650882  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:12.660449  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665173  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665239  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.670785  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:12.681196  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:12.690775  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695204  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695262  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.700668  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:12.710205  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:12.719691  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724744  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724803  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.730472  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:12.740194  995192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:12.744773  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:12.750633  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:12.756228  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:12.762258  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:12.767895  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:12.773716  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
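	(The openssl x509 -noout -checkend 86400 runs above ask whether each certificate expires within the next 24 hours. A minimal Go equivalent — a sketch of that check, not minikube code — is shown below; the cert path is passed as an argument, e.g. one of the /var/lib/minikube/certs/... paths from the log, run on the VM.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Minimal sketch of what "openssl x509 -noout -in <cert> -checkend 86400"
	// answers: does the certificate expire within the next 24 hours?
	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
			os.Exit(2)
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in input")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		expiringSoon := time.Now().Add(24 * time.Hour).After(cert.NotAfter)
		fmt.Printf("notAfter=%s expiringWithin24h=%v\n", cert.NotAfter.Format(time.RFC3339), expiringSoon)
	}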
	I0830 22:19:12.779716  995192 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-791007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:12.779849  995192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:12.779895  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:12.808983  995192 cri.go:89] found id: ""
	I0830 22:19:12.809055  995192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:12.818188  995192 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:12.818208  995192 kubeadm.go:636] restartCluster start
	I0830 22:19:12.818258  995192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:12.829333  995192 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.830440  995192 kubeconfig.go:92] found "default-k8s-diff-port-791007" server: "https://192.168.61.104:8444"
	I0830 22:19:12.833172  995192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:12.841419  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.841468  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.852072  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.852092  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.852135  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.862195  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.362894  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.362981  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.374932  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.862450  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.862558  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.874629  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.363249  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.363368  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.375071  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.862656  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.862767  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.874077  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.363282  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.363389  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.374762  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.862279  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.862375  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.873942  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.362457  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.362554  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.373922  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.862336  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.862415  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.873540  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.964585  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:14.965020  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:14.965042  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:14.964995  996542 retry.go:31] will retry after 1.89297354s: waiting for machine to come up
	I0830 22:19:16.859309  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:16.859825  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:16.859852  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:16.859777  996542 retry.go:31] will retry after 2.908196896s: waiting for machine to come up
	I0830 22:19:17.363243  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.363347  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.378177  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:17.862706  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.862785  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.877394  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.363052  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.363183  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.377397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.862918  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.862995  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.878397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.362972  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.363052  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.374591  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.863153  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.863233  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.878572  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.362613  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.362703  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.374006  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.862535  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.862634  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.874066  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.362612  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.362721  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.375262  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.863011  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.863113  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.874498  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.771969  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:19.772453  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:19.772482  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:19.772410  996542 retry.go:31] will retry after 3.967899631s: waiting for machine to come up
	I0830 22:19:23.743741  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744344  995603 main.go:141] libmachine: (old-k8s-version-250163) Found IP for machine: 192.168.39.10
	I0830 22:19:23.744371  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserving static IP address...
	I0830 22:19:23.744387  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has current primary IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744827  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.744860  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserved static IP address: 192.168.39.10
	I0830 22:19:23.744877  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | skip adding static IP to network mk-old-k8s-version-250163 - found existing host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"}
	I0830 22:19:23.744904  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Getting to WaitForSSH function...
	I0830 22:19:23.744920  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting for SSH to be available...
	I0830 22:19:23.747285  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747642  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.747676  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747864  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH client type: external
	I0830 22:19:23.747896  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa (-rw-------)
	I0830 22:19:23.747935  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:23.747954  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | About to run SSH command:
	I0830 22:19:23.747971  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | exit 0
	I0830 22:19:23.836434  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:23.837035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetConfigRaw
	I0830 22:19:23.837845  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:23.840648  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.841088  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841433  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:19:23.841663  995603 machine.go:88] provisioning docker machine ...
	I0830 22:19:23.841688  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:23.841895  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842049  995603 buildroot.go:166] provisioning hostname "old-k8s-version-250163"
	I0830 22:19:23.842069  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842291  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.844953  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845376  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.845408  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845678  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.845885  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846186  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.846361  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.846839  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.846861  995603 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-250163 && echo "old-k8s-version-250163" | sudo tee /etc/hostname
	I0830 22:19:23.981507  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-250163
	
	I0830 22:19:23.981556  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.984891  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.985249  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985369  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.985604  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.985811  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.986000  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.986199  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.986603  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.986620  995603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-250163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-250163/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-250163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:24.115894  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:24.115952  995603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:24.115985  995603 buildroot.go:174] setting up certificates
	I0830 22:19:24.115996  995603 provision.go:83] configureAuth start
	I0830 22:19:24.116014  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:24.116342  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:24.118887  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119266  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.119312  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119572  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.122166  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122551  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.122590  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122700  995603 provision.go:138] copyHostCerts
	I0830 22:19:24.122769  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:24.122793  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:24.122868  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:24.122989  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:24.123004  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:24.123035  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:24.123168  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:24.123184  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:24.123217  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:24.123302  995603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-250163 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube old-k8s-version-250163]
	I0830 22:19:24.303093  995603 provision.go:172] copyRemoteCerts
	I0830 22:19:24.303156  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:24.303182  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.305900  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306173  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.306199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306352  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.306545  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.306728  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.306873  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.393858  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:24.418791  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:19:24.441090  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:24.462926  995603 provision.go:86] duration metric: configureAuth took 346.913079ms
	I0830 22:19:24.462952  995603 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:24.463136  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:19:24.463224  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.465978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466321  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.466357  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466559  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.466785  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.466934  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.467035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.467173  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.467657  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.467676  995603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:25.058077  994624 start.go:369] acquired machines lock for "no-preload-698195" in 53.768050843s
	I0830 22:19:25.058128  994624 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:25.058141  994624 fix.go:54] fixHost starting: 
	I0830 22:19:25.058564  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:25.058603  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:25.076580  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0830 22:19:25.077082  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:25.077788  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:19:25.077824  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:25.078214  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:25.078418  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:25.078695  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:19:25.080411  994624 fix.go:102] recreateIfNeeded on no-preload-698195: state=Stopped err=<nil>
	I0830 22:19:25.080447  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	W0830 22:19:25.080636  994624 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:25.082566  994624 out.go:177] * Restarting existing kvm2 VM for "no-preload-698195" ...
	I0830 22:19:24.795523  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:24.795562  995603 machine.go:91] provisioned docker machine in 953.87669ms
	I0830 22:19:24.795575  995603 start.go:300] post-start starting for "old-k8s-version-250163" (driver="kvm2")
	I0830 22:19:24.795590  995603 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:24.795616  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:24.795984  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:24.796046  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.799136  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799534  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.799561  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799797  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.799996  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.800210  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.800396  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.890335  995603 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:24.894780  995603 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:24.894807  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:24.894890  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:24.894986  995603 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:24.895110  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:24.907259  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:24.932802  995603 start.go:303] post-start completed in 137.211475ms
	I0830 22:19:24.932829  995603 fix.go:56] fixHost completed within 19.396077949s
	I0830 22:19:24.932858  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.935762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936118  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.936160  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936310  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.936538  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936721  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936918  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.937109  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.937748  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.937767  995603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:25.057876  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433965.004650095
	
	I0830 22:19:25.057911  995603 fix.go:206] guest clock: 1693433965.004650095
	I0830 22:19:25.057924  995603 fix.go:219] Guest: 2023-08-30 22:19:25.004650095 +0000 UTC Remote: 2023-08-30 22:19:24.932833395 +0000 UTC m=+145.224486267 (delta=71.8167ms)
	I0830 22:19:25.057987  995603 fix.go:190] guest clock delta is within tolerance: 71.8167ms
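The `fix.go` lines above compare the guest clock, read over SSH, against the host clock and accept the machine when the delta is within tolerance; judging by the seconds.nanoseconds output (1693433965.004650095), the command mangled by the `%!s(MISSING).%!N(MISSING)` formatting verbs is evidently `date +%s.%N`. A hedged sketch of that comparison, not minikube's actual code (the parsing helper and the 2s tolerance are illustrative assumptions):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output (seconds.nanoseconds,
// e.g. "1693433965.004650095" as in the log) and returns how far it is from
// the given local time. %N is zero-padded to nine digits, so the fractional
// part can be parsed directly as nanoseconds.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	if len(parts) != 2 {
		return 0, fmt.Errorf("unexpected date output %q", guestOut)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return 0, err
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	delta, err := clockDelta("1693433965.004650095", time.Now())
	if err != nil {
		fmt.Println(err)
		return
	}
	// Assumed tolerance for illustration only.
	const tolerance = 2 * time.Second
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
```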
	I0830 22:19:25.057998  995603 start.go:83] releasing machines lock for "old-k8s-version-250163", held for 19.521294969s
	I0830 22:19:25.058036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.058351  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:25.061325  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061749  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.061782  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061965  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062635  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062921  995603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:25.062977  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.063084  995603 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:25.063119  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.065978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066217  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066375  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066428  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066620  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066668  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066784  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066806  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.066953  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.067142  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067206  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067278  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.067389  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.181017  995603 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:25.188428  995603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:25.337310  995603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:25.346144  995603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:25.346231  995603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:25.368931  995603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:25.368966  995603 start.go:466] detecting cgroup driver to use...
	I0830 22:19:25.369048  995603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:25.383524  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:25.399296  995603 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:25.399365  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:25.416387  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:25.430426  995603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:25.552861  995603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:25.699278  995603 docker.go:212] disabling docker service ...
	I0830 22:19:25.699350  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:25.718108  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:25.736420  995603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:25.871165  995603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:25.993674  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:26.009215  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:26.027014  995603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0830 22:19:26.027122  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.038902  995603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:26.038985  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.051908  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.062635  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.073049  995603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:26.086514  995603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:26.098352  995603 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:26.098405  995603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:26.117326  995603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:26.129854  995603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:26.259656  995603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:26.476938  995603 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:26.477034  995603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:26.482773  995603 start.go:534] Will wait 60s for crictl version
	I0830 22:19:26.482841  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:26.486853  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:26.525498  995603 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:26.525595  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.585226  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.641386  995603 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
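The "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" steps above amount to polling for the CRI socket to reappear after `systemctl restart crio`. A small illustrative sketch of that wait; the path and 60s timeout come from the log, while the helper name and polling interval are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a file (here the CRI-O socket) until it exists or
// the timeout elapses, roughly matching the "Will wait 60s for socket path"
// step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s did not appear within %v", path, timeout)
		}
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```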
	I0830 22:19:22.362364  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:22.362448  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:22.373701  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:22.842449  995192 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:22.842531  995192 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:22.842551  995192 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:22.842623  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:22.871557  995192 cri.go:89] found id: ""
	I0830 22:19:22.871624  995192 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:22.886295  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:22.894486  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:22.894549  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902556  995192 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902578  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.017775  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.631493  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.831074  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.923222  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.994499  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:23.994583  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.007515  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.519195  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.019167  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.519068  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.019708  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.519664  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.547751  995192 api_server.go:72] duration metric: took 2.553248139s to wait for apiserver process to appear ...
	I0830 22:19:26.547794  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:26.547816  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:25.084008  994624 main.go:141] libmachine: (no-preload-698195) Calling .Start
	I0830 22:19:25.084189  994624 main.go:141] libmachine: (no-preload-698195) Ensuring networks are active...
	I0830 22:19:25.085011  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network default is active
	I0830 22:19:25.085319  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network mk-no-preload-698195 is active
	I0830 22:19:25.085676  994624 main.go:141] libmachine: (no-preload-698195) Getting domain xml...
	I0830 22:19:25.086427  994624 main.go:141] libmachine: (no-preload-698195) Creating domain...
	I0830 22:19:26.443042  994624 main.go:141] libmachine: (no-preload-698195) Waiting to get IP...
	I0830 22:19:26.444179  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.444691  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.444784  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.444686  996676 retry.go:31] will retry after 208.17912ms: waiting for machine to come up
	I0830 22:19:26.654132  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.654621  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.654651  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.654581  996676 retry.go:31] will retry after 304.249592ms: waiting for machine to come up
	I0830 22:19:26.960205  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.960990  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.961014  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.960912  996676 retry.go:31] will retry after 342.108913ms: waiting for machine to come up
	I0830 22:19:27.304766  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.305661  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.305700  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.305602  996676 retry.go:31] will retry after 500.147687ms: waiting for machine to come up
	I0830 22:19:27.808375  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.808867  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.808884  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.808796  996676 retry.go:31] will retry after 562.543443ms: waiting for machine to come up
	I0830 22:19:28.373420  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:28.373912  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:28.373938  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:28.373863  996676 retry.go:31] will retry after 755.787662ms: waiting for machine to come up
	I0830 22:19:26.642985  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:26.646304  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646712  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:26.646773  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646957  995603 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:26.652439  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:26.667339  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:19:26.667418  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:26.703670  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:26.703750  995603 ssh_runner.go:195] Run: which lz4
	I0830 22:19:26.708087  995603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:19:26.712329  995603 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:26.712362  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0830 22:19:28.602303  995603 crio.go:444] Took 1.894253 seconds to copy over tarball
	I0830 22:19:28.602408  995603 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:30.838763  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.838807  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:30.838824  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:30.908950  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.908987  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:31.409372  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.420411  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.420480  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:31.909095  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.916778  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.916813  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
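The 403 and 500 responses above are what a freshly restarted apiserver returns while its post-start hooks (rbac/bootstrap-roles and friends) finish; minikube simply keeps re-querying `/healthz` until it gets 200. A minimal, self-contained sketch of such a poll follows; the endpoint is taken from the log, but the client construction, interval, and timeout are assumptions rather than minikube's code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the timeout elapses. TLS verification is skipped here because the poller
// may not trust the cluster CA yet; non-200 bodies are printed the way the
// log echoes the 403/500 responses while post-start hooks complete.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz request failed: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.104:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```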
	I0830 22:19:29.130983  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:29.131530  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:29.131565  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:29.131459  996676 retry.go:31] will retry after 951.657872ms: waiting for machine to come up
	I0830 22:19:30.084853  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:30.085280  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:30.085306  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:30.085247  996676 retry.go:31] will retry after 1.469099841s: waiting for machine to come up
	I0830 22:19:31.556432  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:31.556893  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:31.556918  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:31.556809  996676 retry.go:31] will retry after 1.217757948s: waiting for machine to come up
	I0830 22:19:32.775796  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:32.776120  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:32.776152  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:32.776080  996676 retry.go:31] will retry after 2.032727742s: waiting for machine to come up
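
The libmachine lines above poll the freshly created KVM domain for an IP address, sleeping for a growing interval between attempts until the machine comes up. A self-contained sketch of that retry-until-deadline pattern, using a hypothetical waitFor helper rather than minikube's actual retry.go API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls cond with a growing delay until it succeeds or the deadline passes.
func waitFor(cond func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay, roughly like the intervals in the log
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	start := time.Now()
	err := waitFor(func() (bool, error) {
		// Placeholder condition: in minikube this would be "does the domain report an IP yet?".
		return time.Since(start) > 3*time.Second, nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}
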
	I0830 22:19:31.859924  995603 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.257478408s)
	I0830 22:19:31.859957  995603 crio.go:451] Took 3.257622 seconds to extract the tarball
	I0830 22:19:31.859970  995603 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:31.917027  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:31.965752  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:31.965777  995603 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:31.965886  995603 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.965944  995603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.965980  995603 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.965879  995603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.966084  995603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.965878  995603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.965967  995603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:31.965901  995603 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968024  995603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.968045  995603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.968079  995603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.968186  995603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.968191  995603 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.968193  995603 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968248  995603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.968766  995603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.140478  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.140975  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.157997  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0830 22:19:32.159468  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.159950  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.160033  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.161682  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.255481  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:32.261235  995603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0830 22:19:32.261291  995603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.261340  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.282724  995603 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0830 22:19:32.282781  995603 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.282854  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378268  995603 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0830 22:19:32.378372  995603 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0830 22:19:32.378417  995603 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0830 22:19:32.378507  995603 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.378551  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378377  995603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0830 22:19:32.378578  995603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0830 22:19:32.378591  995603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.378600  995603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.378295  995603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378632  995603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.378439  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378657  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.468864  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.468935  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.469002  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.469032  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.469123  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.469183  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0830 22:19:32.469184  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.563508  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0830 22:19:32.563630  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0830 22:19:32.586962  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0830 22:19:32.587044  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0830 22:19:32.587059  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0830 22:19:32.587115  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0830 22:19:32.587208  995603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.587265  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0830 22:19:32.592221  995603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0830 22:19:32.592246  995603 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.592300  995603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0830 22:19:34.254194  995603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.661863162s)
	I0830 22:19:34.254235  995603 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0830 22:19:34.254281  995603 cache_images.go:92] LoadImages completed in 2.288490025s
	W0830 22:19:34.254418  995603 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
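
The LoadImages phase above inspects the runtime's store for each required image, removes tags whose digests don't match, and loads what it can from the local cache tarballs; it then warns because the kube-scheduler tarball is missing on disk. A rough sketch of the presence check, shelling out to crictl as the log does; the JSON field names (images, repoTags) are assumptions about crictl's output format, not verified against this exact version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of `crictl images --output json`: {"images":[{"repoTags":[...]}]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagePresent reports whether the runtime already has an image with the given tag.
func imagePresent(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePresent("registry.k8s.io/pause:3.1")
	fmt.Println(ok, err)
}
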
	I0830 22:19:34.254514  995603 ssh_runner.go:195] Run: crio config
	I0830 22:19:34.338842  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.338876  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.338903  995603 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:34.338929  995603 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-250163 NodeName:old-k8s-version-250163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 22:19:34.339134  995603 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-250163"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-250163
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.10:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:34.339240  995603 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-250163 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:19:34.339313  995603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0830 22:19:34.348990  995603 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:34.349076  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:34.358084  995603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0830 22:19:34.376989  995603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:34.396552  995603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0830 22:19:34.416666  995603 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:34.421910  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
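
The two commands above pin control-plane.minikube.internal to the node IP in /etc/hosts: the grep looks for an existing entry, and the bash one-liner drops any stale line before appending the new mapping. The same edit expressed in Go as a standalone sketch (path, IP and hostname copied from the log; writing /etc/hosts would of course require root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any old line ending in "<tab>host" and appends "ip<tab>host".
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue // drop stale entries for this hostname
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + host + "\n"
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	err := ensureHostsEntry("/etc/hosts", "192.168.39.10", "control-plane.minikube.internal")
	fmt.Println(err)
}
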
	I0830 22:19:34.436393  995603 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163 for IP: 192.168.39.10
	I0830 22:19:34.436490  995603 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:34.436717  995603 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:34.436774  995603 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:34.436867  995603 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.key
	I0830 22:19:34.436944  995603 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key.713efbbe
	I0830 22:19:34.437006  995603 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key
	I0830 22:19:34.437140  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:34.437187  995603 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:34.437203  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:34.437249  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:34.437284  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:34.437320  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:34.437388  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:34.438079  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:34.470943  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:19:34.503477  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:34.533783  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:19:34.562423  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:34.594418  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:34.625417  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:34.657444  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:34.689407  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:34.719004  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:34.745856  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:32.410110  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.418241  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.418269  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:32.910053  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.915839  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.915870  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.410086  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.488115  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.488161  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.909647  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.915252  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.915284  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.409978  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.418957  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:34.418995  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.909561  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.925400  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:19:34.938760  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:19:34.938793  995192 api_server.go:131] duration metric: took 8.390990557s to wait for apiserver health ...
	I0830 22:19:34.938804  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.938813  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.941052  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:19:34.942805  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:34.967544  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:19:34.998450  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:35.012600  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:35.012681  995192 system_pods.go:61] "coredns-5dd5756b68-992p2" [83ad338b-0338-45c3-a5ed-f772d100046b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:35.012702  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [4ed4f652-47c4-4d79-b8a8-dd0cc778bce0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:19:35.012714  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [c01b9dfc-ad6f-4348-abf0-fde4a64bfa98] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:19:35.012732  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [94cbccaf-3d5a-480c-8ee0-b8af5030909d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:19:35.012748  995192 system_pods.go:61] "kube-proxy-vckmf" [03f05466-f99b-4803-9164-233bfb9e7bb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:19:35.012760  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [2c5e190d-c93b-400a-8538-e31cc2016cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:19:35.012774  995192 system_pods.go:61] "metrics-server-57f55c9bc5-p8pp2" [4eaff1be-4258-427b-a110-47dabbffecee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:19:35.012788  995192 system_pods.go:61] "storage-provisioner" [8db3da8b-8256-405d-8d9c-79fdb6da8ab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:19:35.012800  995192 system_pods.go:74] duration metric: took 14.324835ms to wait for pod list to return data ...
	I0830 22:19:35.012814  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:35.024186  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:35.024216  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:35.024229  995192 node_conditions.go:105] duration metric: took 11.409776ms to run NodePressure ...
	I0830 22:19:35.024284  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:35.318824  995192 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324484  995192 kubeadm.go:787] kubelet initialised
	I0830 22:19:35.324512  995192 kubeadm.go:788] duration metric: took 5.656923ms waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324525  995192 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:19:35.334137  995192 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
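
pod_ready.go above waits for each system-critical pod to reach the Ready condition; the later "Ready":"False" / "Ready":"True" lines are that poll loop finishing for coredns. A bare-bones version of the same wait, shelling out to kubectl instead of using client-go (namespace and pod name copied from the log; the jsonpath expression is standard kubectl syntax):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is currently "True".
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--namespace", namespace, "get", "pod", pod,
		"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podReady("kube-system", "coredns-5dd5756b68-992p2")
		if ok {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("not ready yet:", err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
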
	I0830 22:19:34.810276  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:34.810797  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:34.810836  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:34.810732  996676 retry.go:31] will retry after 2.550508742s: waiting for machine to come up
	I0830 22:19:37.364002  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:37.364550  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:37.364582  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:37.364489  996676 retry.go:31] will retry after 2.230782644s: waiting for machine to come up
	I0830 22:19:34.771235  995603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:34.787672  995603 ssh_runner.go:195] Run: openssl version
	I0830 22:19:34.793400  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:34.803208  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808108  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808166  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.814296  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:34.824791  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:34.838527  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844726  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844789  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.852442  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:34.862510  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:34.875456  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880581  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880702  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.886591  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:34.897133  995603 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:34.902292  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:34.908905  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:34.915276  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:34.921204  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:34.927878  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:34.934091  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
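
The openssl -checkend 86400 runs above confirm that each existing control-plane certificate remains valid for at least one more day before the cluster restart reuses it. The equivalent check using only Go's standard library (certificate path copied from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path stays valid for at least d.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(ok, err) // mirrors: openssl x509 -noout -checkend 86400
}
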
	I0830 22:19:34.940851  995603 kubeadm.go:404] StartCluster: {Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:34.940966  995603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:34.941036  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:34.978950  995603 cri.go:89] found id: ""
	I0830 22:19:34.979038  995603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:34.988290  995603 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:34.988324  995603 kubeadm.go:636] restartCluster start
	I0830 22:19:34.988403  995603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:34.998277  995603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:34.999385  995603 kubeconfig.go:92] found "old-k8s-version-250163" server: "https://192.168.39.10:8443"
	I0830 22:19:35.002017  995603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:35.013903  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.013962  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.028780  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.028800  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.028845  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.043243  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.543986  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.544109  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.555939  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.044164  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.044259  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.055496  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.544110  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.544243  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.555999  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.043535  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.043628  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.055019  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.543435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.543546  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.558778  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.044367  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.044482  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.058777  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.543327  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.543431  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.555133  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.043720  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.043874  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.059955  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.543461  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.543625  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.558707  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.360241  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:39.363755  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:40.357373  995192 pod_ready.go:92] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:40.357396  995192 pod_ready.go:81] duration metric: took 5.023230161s waiting for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:40.357409  995192 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:39.597197  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:39.597650  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:39.597684  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:39.597603  996676 retry.go:31] will retry after 3.562835127s: waiting for machine to come up
	I0830 22:19:43.161572  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:43.162020  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:43.162054  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:43.161973  996676 retry.go:31] will retry after 5.409514109s: waiting for machine to come up
	I0830 22:19:40.044014  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.044104  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.059377  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:40.543910  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.544012  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.555295  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.043380  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.043493  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.055443  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.544046  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.544121  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.555832  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.043785  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.043876  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.054809  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.543376  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.543463  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.554254  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.043435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.043543  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.054734  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.544308  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.544418  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.555603  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.044211  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.044291  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.055403  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.544013  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.544117  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.555197  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.378396  995192 pod_ready.go:102] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:42.881428  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.881456  995192 pod_ready.go:81] duration metric: took 2.524040213s waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.881467  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892688  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.892718  995192 pod_ready.go:81] duration metric: took 11.243576ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892731  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898434  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.898463  995192 pod_ready.go:81] duration metric: took 5.721888ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898476  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904261  995192 pod_ready.go:92] pod "kube-proxy-vckmf" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.904287  995192 pod_ready.go:81] duration metric: took 5.803127ms waiting for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904299  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153736  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:43.153763  995192 pod_ready.go:81] duration metric: took 249.454932ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153777  995192 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:45.462667  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:48.575718  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576172  994624 main.go:141] libmachine: (no-preload-698195) Found IP for machine: 192.168.72.28
	I0830 22:19:48.576206  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has current primary IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576217  994624 main.go:141] libmachine: (no-preload-698195) Reserving static IP address...
	I0830 22:19:48.576671  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.576719  994624 main.go:141] libmachine: (no-preload-698195) Reserved static IP address: 192.168.72.28
	I0830 22:19:48.576754  994624 main.go:141] libmachine: (no-preload-698195) DBG | skip adding static IP to network mk-no-preload-698195 - found existing host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"}
	I0830 22:19:48.576776  994624 main.go:141] libmachine: (no-preload-698195) DBG | Getting to WaitForSSH function...
	I0830 22:19:48.576792  994624 main.go:141] libmachine: (no-preload-698195) Waiting for SSH to be available...
	I0830 22:19:48.578953  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579261  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.579290  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579398  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH client type: external
	I0830 22:19:48.579417  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa (-rw-------)
	I0830 22:19:48.579451  994624 main.go:141] libmachine: (no-preload-698195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:48.579478  994624 main.go:141] libmachine: (no-preload-698195) DBG | About to run SSH command:
	I0830 22:19:48.579493  994624 main.go:141] libmachine: (no-preload-698195) DBG | exit 0
	I0830 22:19:48.679834  994624 main.go:141] libmachine: (no-preload-698195) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:48.680237  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetConfigRaw
	I0830 22:19:48.681064  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.683388  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.683844  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.683884  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.684153  994624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/config.json ...
	I0830 22:19:48.684435  994624 machine.go:88] provisioning docker machine ...
	I0830 22:19:48.684462  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:48.684708  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.684851  994624 buildroot.go:166] provisioning hostname "no-preload-698195"
	I0830 22:19:48.684883  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.685013  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.687508  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.687975  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.688018  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.688198  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.688413  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688599  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688830  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.689061  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.689695  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.689718  994624 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-698195 && echo "no-preload-698195" | sudo tee /etc/hostname
	I0830 22:19:45.014985  995603 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:45.015030  995603 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:45.015045  995603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:45.015102  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:45.049952  995603 cri.go:89] found id: ""
	I0830 22:19:45.050039  995603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:45.065202  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:45.074198  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:45.074330  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083407  995603 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083438  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:45.211527  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.256339  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.044735651s)
	I0830 22:19:46.256389  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.469714  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.542945  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.644533  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:46.644632  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:46.659432  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.182415  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.682613  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.182661  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.206336  995603 api_server.go:72] duration metric: took 1.561801361s to wait for apiserver process to appear ...
	I0830 22:19:48.206374  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:48.206399  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:50.136893  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 1m0.108561967s
	I0830 22:19:50.136941  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:50.136952  994705 fix.go:54] fixHost starting: 
	I0830 22:19:50.137347  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:50.137386  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:50.156678  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0830 22:19:50.157148  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:50.157739  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:19:50.157765  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:50.158103  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:50.158283  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.158445  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:19:50.160098  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Running err=<nil>
	W0830 22:19:50.160115  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:50.162162  994705 out.go:177] * Updating the running kvm2 "embed-certs-208903" VM ...
	I0830 22:19:50.163634  994705 machine.go:88] provisioning docker machine ...
	I0830 22:19:50.163663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.163906  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164077  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:19:50.164104  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164288  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.166831  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167198  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.167234  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167371  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.167561  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167731  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167902  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.168108  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.168592  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.168610  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:19:50.306738  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:19:50.306772  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.309523  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.309929  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.309974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.310182  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.310349  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310638  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310845  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.311027  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.311610  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.311644  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:50.433972  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:50.434005  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:50.434045  994705 buildroot.go:174] setting up certificates
	I0830 22:19:50.434057  994705 provision.go:83] configureAuth start
	I0830 22:19:50.434069  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.434388  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:19:50.437450  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.437883  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.437916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.438115  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.440654  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.441059  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441213  994705 provision.go:138] copyHostCerts
	I0830 22:19:50.441271  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:50.441283  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:50.441352  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:50.441453  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:50.441462  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:50.441481  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:50.441563  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:50.441575  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:50.441606  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:50.441684  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:19:50.721978  994705 provision.go:172] copyRemoteCerts
	I0830 22:19:50.722039  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:50.722072  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.724893  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725257  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.725289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725571  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.725799  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.726014  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.726181  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:19:50.817217  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:50.843335  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:50.869233  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:50.897508  994705 provision.go:86] duration metric: configureAuth took 463.432948ms
	I0830 22:19:50.897544  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:50.897804  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:50.897904  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.900633  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.901040  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901210  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.901404  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901547  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901680  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.901875  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.902287  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.902310  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:51.128816  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.128855  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:19:51.128866  994705 machine.go:91] provisioned docker machine in 965.212906ms
	I0830 22:19:51.128900  994705 fix.go:56] fixHost completed within 991.948899ms
	I0830 22:19:51.128906  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 991.990648ms
	W0830 22:19:51.129050  994705 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p embed-certs-208903" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.131823  994705 out.go:177] 
	W0830 22:19:51.133957  994705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0830 22:19:51.133985  994705 out.go:239] * 
	W0830 22:19:51.134788  994705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:19:51.136212  994705 out.go:177] 
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:18:37 UTC, ends at Wed 2023-08-30 22:20:40 UTC. --
	Aug 30 22:18:39 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:18:39 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 30 22:18:44 embed-certs-208903 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:18:44 embed-certs-208903 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 30 22:19:51 embed-certs-208903 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:19:51 embed-certs-208903 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	* 
	* ==> container status <==
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug30 22:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072921] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.305428] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387854] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153721] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.490379] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	
	* 
	* ==> kernel <==
	*  22:20:46 up 2 min,  0 users,  load average: 0.02, 0.02, 0.00
	Linux embed-certs-208903 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:18:37 UTC, ends at Wed 2023-08-30 22:20:46 UTC. --
	-- No entries --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:19:57.789552  996952 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:19:51Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:19:53Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:19:55Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:19:57Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:20:03.827311  996952 logs.go:281] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:19:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:19:59Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:01Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:03Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:20:09.861226  996952 logs.go:281] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:20:03Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:20:05Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:07Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:09Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:20:15.894742  996952 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:20:09Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:20:11Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:13Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:15Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:20:21.924689  996952 logs.go:281] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:20:15Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:20:17Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:19Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:21Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:20:27.960587  996952 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:20:21Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:20:23Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:25Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:27Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:20:33.990227  996952 logs.go:281] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:20:27Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:20:29Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:31Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:33Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:20:40.023435  996952 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:20:34Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:20:36Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:38Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:40Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:20:46.106324  996952 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:20:40Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:20:42Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:44Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:20:46Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-08-30T22:20:40Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead.\"\ntime=\"2023-08-30T22:20:42Z\" level=error msg=\"connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\ntime=\"2023-08-30T22:20:44Z\" level=error msg=\"connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\ntime=\"2023-08-30T22:20:46Z\" level=fatal msg=\"connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /st
derr **"
	E0830 22:20:46.254133  996952 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0830 22:20:46.240809     559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:20:46.241367     559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:20:46.243285     559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:20:46.244690     559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:20:46.246055     559 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nE0830 22:20:46.240809     559 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:20:46.241367     559 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:20:46.243285     559 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:20:46.244690     559 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:20:46.246055     559 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nThe connection to the s
erver localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903: exit status 2 (254.291507ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "embed-certs-208903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (410.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-250163 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-250163 --alsologtostderr -v=3: exit status 82 (2m0.825172875s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-250163"  ...
	* Stopping node "old-k8s-version-250163"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:14:27.863102  994931 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:14:27.863229  994931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:14:27.863238  994931 out.go:309] Setting ErrFile to fd 2...
	I0830 22:14:27.863242  994931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:14:27.863475  994931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:14:27.863693  994931 out.go:303] Setting JSON to false
	I0830 22:14:27.863817  994931 mustload.go:65] Loading cluster: old-k8s-version-250163
	I0830 22:14:27.864175  994931 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:14:27.864269  994931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:14:27.864427  994931 mustload.go:65] Loading cluster: old-k8s-version-250163
	I0830 22:14:27.864531  994931 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:14:27.864555  994931 stop.go:39] StopHost: old-k8s-version-250163
	I0830 22:14:27.864913  994931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:14:27.864967  994931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:14:27.879577  994931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I0830 22:14:27.880022  994931 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:14:27.880630  994931 main.go:141] libmachine: Using API Version  1
	I0830 22:14:27.880651  994931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:14:27.881029  994931 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:14:27.883753  994931 out.go:177] * Stopping node "old-k8s-version-250163"  ...
	I0830 22:14:27.885219  994931 main.go:141] libmachine: Stopping "old-k8s-version-250163"...
	I0830 22:14:27.885239  994931 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:14:27.886976  994931 main.go:141] libmachine: (old-k8s-version-250163) Calling .Stop
	I0830 22:14:27.890084  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 0/60
	I0830 22:14:28.891317  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 1/60
	I0830 22:14:29.893036  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 2/60
	I0830 22:14:30.894479  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 3/60
	I0830 22:14:31.895894  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 4/60
	I0830 22:14:32.898020  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 5/60
	I0830 22:14:33.899329  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 6/60
	I0830 22:14:34.900734  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 7/60
	I0830 22:14:35.902052  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 8/60
	I0830 22:14:36.903423  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 9/60
	I0830 22:14:37.904996  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 10/60
	I0830 22:14:38.906261  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 11/60
	I0830 22:14:39.907558  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 12/60
	I0830 22:14:40.908827  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 13/60
	I0830 22:14:41.910272  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 14/60
	I0830 22:14:42.912251  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 15/60
	I0830 22:14:43.913513  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 16/60
	I0830 22:14:44.915051  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 17/60
	I0830 22:14:45.916406  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 18/60
	I0830 22:14:46.917805  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 19/60
	I0830 22:14:47.920067  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 20/60
	I0830 22:14:48.921463  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 21/60
	I0830 22:14:49.922881  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 22/60
	I0830 22:14:50.924276  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 23/60
	I0830 22:14:51.925684  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 24/60
	I0830 22:14:52.927577  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 25/60
	I0830 22:14:53.928964  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 26/60
	I0830 22:14:54.930390  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 27/60
	I0830 22:14:55.931672  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 28/60
	I0830 22:14:56.933017  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 29/60
	I0830 22:14:57.935060  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 30/60
	I0830 22:14:58.936594  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 31/60
	I0830 22:14:59.938065  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 32/60
	I0830 22:15:00.939447  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 33/60
	I0830 22:15:01.940873  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 34/60
	I0830 22:15:02.942778  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 35/60
	I0830 22:15:03.944145  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 36/60
	I0830 22:15:04.945437  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 37/60
	I0830 22:15:05.946744  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 38/60
	I0830 22:15:06.948047  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 39/60
	I0830 22:15:07.950190  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 40/60
	I0830 22:15:08.951993  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 41/60
	I0830 22:15:09.953370  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 42/60
	I0830 22:15:10.954791  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 43/60
	I0830 22:15:11.956082  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 44/60
	I0830 22:15:12.957882  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 45/60
	I0830 22:15:13.959826  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 46/60
	I0830 22:15:14.961103  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 47/60
	I0830 22:15:15.962689  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 48/60
	I0830 22:15:16.964000  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 49/60
	I0830 22:15:17.965973  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 50/60
	I0830 22:15:18.967261  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 51/60
	I0830 22:15:19.968611  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 52/60
	I0830 22:15:20.970163  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 53/60
	I0830 22:15:21.971537  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 54/60
	I0830 22:15:22.973371  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 55/60
	I0830 22:15:23.974704  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 56/60
	I0830 22:15:24.976043  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 57/60
	I0830 22:15:25.977549  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 58/60
	I0830 22:15:26.978835  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 59/60
	I0830 22:15:27.980040  994931 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0830 22:15:27.980106  994931 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:15:27.980128  994931 retry.go:31] will retry after 528.492785ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:15:28.508778  994931 stop.go:39] StopHost: old-k8s-version-250163
	I0830 22:15:28.509294  994931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:15:28.509351  994931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:15:28.524304  994931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0830 22:15:28.524810  994931 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:15:28.525319  994931 main.go:141] libmachine: Using API Version  1
	I0830 22:15:28.525335  994931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:15:28.525748  994931 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:15:28.528024  994931 out.go:177] * Stopping node "old-k8s-version-250163"  ...
	I0830 22:15:28.529443  994931 main.go:141] libmachine: Stopping "old-k8s-version-250163"...
	I0830 22:15:28.529458  994931 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:15:28.531059  994931 main.go:141] libmachine: (old-k8s-version-250163) Calling .Stop
	I0830 22:15:28.534235  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 0/60
	I0830 22:15:29.535709  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 1/60
	I0830 22:15:30.537063  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 2/60
	I0830 22:15:31.538374  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 3/60
	I0830 22:15:32.539724  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 4/60
	I0830 22:15:33.541351  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 5/60
	I0830 22:15:34.542802  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 6/60
	I0830 22:15:35.544084  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 7/60
	I0830 22:15:36.545661  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 8/60
	I0830 22:15:37.547041  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 9/60
	I0830 22:15:38.548775  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 10/60
	I0830 22:15:39.550109  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 11/60
	I0830 22:15:40.551638  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 12/60
	I0830 22:15:41.553146  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 13/60
	I0830 22:15:42.554345  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 14/60
	I0830 22:15:43.556087  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 15/60
	I0830 22:15:44.557553  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 16/60
	I0830 22:15:45.558984  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 17/60
	I0830 22:15:46.560286  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 18/60
	I0830 22:15:47.561631  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 19/60
	I0830 22:15:48.563218  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 20/60
	I0830 22:15:49.564822  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 21/60
	I0830 22:15:50.566144  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 22/60
	I0830 22:15:51.567505  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 23/60
	I0830 22:15:52.568932  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 24/60
	I0830 22:15:53.570418  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 25/60
	I0830 22:15:54.571782  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 26/60
	I0830 22:15:55.573174  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 27/60
	I0830 22:15:56.574512  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 28/60
	I0830 22:15:57.575834  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 29/60
	I0830 22:15:58.577167  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 30/60
	I0830 22:15:59.578607  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 31/60
	I0830 22:16:00.580092  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 32/60
	I0830 22:16:01.581479  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 33/60
	I0830 22:16:02.582934  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 34/60
	I0830 22:16:03.584442  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 35/60
	I0830 22:16:04.585724  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 36/60
	I0830 22:16:05.587225  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 37/60
	I0830 22:16:06.588598  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 38/60
	I0830 22:16:07.590052  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 39/60
	I0830 22:16:08.591651  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 40/60
	I0830 22:16:09.592997  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 41/60
	I0830 22:16:10.594424  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 42/60
	I0830 22:16:11.595696  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 43/60
	I0830 22:16:12.597091  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 44/60
	I0830 22:16:13.599418  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 45/60
	I0830 22:16:14.600765  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 46/60
	I0830 22:16:15.602249  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 47/60
	I0830 22:16:16.603655  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 48/60
	I0830 22:16:17.605073  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 49/60
	I0830 22:16:18.607248  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 50/60
	I0830 22:16:19.608666  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 51/60
	I0830 22:16:20.610257  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 52/60
	I0830 22:16:21.611573  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 53/60
	I0830 22:16:22.612991  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 54/60
	I0830 22:16:23.614529  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 55/60
	I0830 22:16:24.615972  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 56/60
	I0830 22:16:25.617434  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 57/60
	I0830 22:16:26.618677  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 58/60
	I0830 22:16:27.620196  994931 main.go:141] libmachine: (old-k8s-version-250163) Waiting for machine to stop 59/60
	I0830 22:16:28.620948  994931 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0830 22:16:28.620997  994931 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0830 22:16:28.623203  994931 out.go:177] 
	W0830 22:16:28.624741  994931 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0830 22:16:28.624759  994931 out.go:239] * 
	W0830 22:16:28.628280  994931 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:16:28.629771  994931 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p old-k8s-version-250163 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163
E0830 22:16:32.761852  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163: exit status 3 (18.652777018s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:16:47.284183  995431 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host
	E0830 22:16:47.284204  995431 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-250163" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.48s)
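The exit status 82 above is the GUEST_STOP_TIMEOUT path visible in the log: two stop attempts, each polling the guest once per second for 60 seconds, separated by a ~528 ms retry backoff, with the VM never leaving the "Running" state. The Go sketch below reproduces only that control flow under stated assumptions; stopVM and vmState are hypothetical stand-ins, not minikube's actual libmachine API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopVM and vmState are hypothetical stand-ins for the libmachine driver
// calls; in this sketch a hung guest never leaves the "Running" state.
func stopVM(name string) error   { fmt.Printf("Stopping %q ...\n", name); return nil }
func vmState(name string) string { return "Running" }

// stopWithTimeout mirrors the log above: each attempt issues a stop request,
// then polls the VM state once per second for up to pollSeconds, and the whole
// operation is retried after a short backoff before giving up.
func stopWithTimeout(name string, attempts, pollSeconds int) error {
	for a := 0; a < attempts; a++ {
		if err := stopVM(name); err != nil {
			return err
		}
		for i := 0; i < pollSeconds; i++ {
			if vmState(name) == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, pollSeconds)
			time.Sleep(time.Second)
		}
		time.Sleep(500 * time.Millisecond) // backoff before the next attempt
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Two attempts of 60 polls each is what the failing test ran through
	// before exiting with GUEST_STOP_TIMEOUT (exit status 82).
	if err := stopWithTimeout("old-k8s-version-250163", 2, 60); err != nil {
		fmt.Println("GUEST_STOP_TIMEOUT:", err)
	}
}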

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007: exit status 3 (3.199763734s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:15:02.708177  995092 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.104:22: connect: no route to host
	E0830 22:15:02.708206  995092 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.104:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-791007 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-791007 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154555889s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.104:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-791007 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007: exit status 3 (3.061281768s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:15:11.924218  995162 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.104:22: connect: no route to host
	E0830 22:15:11.924236  995162 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.104:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-791007" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
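This failure and the old-k8s-version one that follows share the same shape: the post-stop status probe prints "Error" instead of "Stopped" because SSH to the node (port 22) is unreachable, and the subsequent addons enable exits with status 11 (MK_ADDON_ENABLE_PAUSED) for the same reason, since the paused check needs an SSH session into the VM to run crictl. The following is a minimal Go sketch of the sequence the test drives, built only from the commands literally shown in the log; it is not the test's actual source, and its error handling is simplified.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "default-k8s-diff-port-791007"

	// Read the host state with the same command the test runs; after a
	// successful stop it should print "Stopped", but in this run it printed
	// "Error" because SSH to the node had no route to host.
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	status := strings.TrimSpace(string(out))
	if err != nil || status != "Stopped" {
		fmt.Printf("expected post-stop host status %q, got %q (err: %v)\n", "Stopped", status, err)
	}

	// Enabling the dashboard addon on the stopped cluster is expected to
	// succeed; here it fails with MK_ADDON_ENABLE_PAUSED because the paused
	// check cannot reach the VM over SSH to run crictl.
	enable := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if err := enable.Run(); err != nil {
		fmt.Println("failed to enable dashboard addon post-stop:", err)
	}
}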

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163
E0830 22:16:49.715798  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163: exit status 3 (3.199501825s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:16:50.484108  995493 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host
	E0830 22:16:50.484132  995493 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-250163 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-250163 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154484044s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-250163 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163
E0830 22:16:57.076248  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163: exit status 3 (3.061693181s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:16:59.700158  995562 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host
	E0830 22:16:59.700178  995562 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.10:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-250163" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (596.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:21:49.715106  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:21:57.076286  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
(the previous warning repeated 146 times in total)
E0830 22:24:22.734192  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
(the previous warning repeated 32 times in total)
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:26:49.715408  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
[the preceding warning repeated 6 more times]
E0830 22:26:57.076865  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
[the preceding warning repeated 38 more times]
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:28:20.126007  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
	[the warning above appears 63 consecutive times in the original log; identical entries collapsed]
E0830 22:29:22.735068  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
	[the warning above appears 23 consecutive times in the original log; identical entries collapsed]
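The repeated warnings come from the test helper polling the cluster for dashboard pods by label selector while the apiserver at 192.168.50.159:8443 is refusing connections. A minimal client-go sketch of that kind of label-selector pod list is shown below; it is illustrative only (the kubeconfig path and error handling are assumptions, not the helpers_test.go code).

// Illustrative sketch of the pod-list call that produces the warnings above:
// listing pods in the kubernetes-dashboard namespace with the
// k8s-app=kubernetes-dashboard label. While the embed-certs apiserver is
// stopped, this call fails with "connect: connection refused".
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the real CI run keeps its files under
	// /home/jenkins/minikube-integration/17114-955377/ (exact file assumed).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// With the apiserver down this is where the
		// "dial tcp 192.168.50.159:8443: connect: connection refused" error surfaces.
		fmt.Println("pod list failed:", err)
		return
	}
	fmt.Println("dashboard pods found:", len(pods.Items))
}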
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903: exit status 2 (260.063072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "embed-certs-208903" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
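The 9m0s figure above is the helper's overall deadline for the post-stop-start dashboard pod. A hedged sketch of such a deadline-bounded poll is shown below using k8s.io/apimachinery's wait package; the real helper evidently threads a context (it reports "context deadline exceeded"), so the package name, interval, and readiness check here are assumptions for illustration, not the minikube implementation.

// Hypothetical package name; this is a sketch of a deadline-bounded wait,
// not the actual helpers_test.go / start_stop_delete_test.go code.
package dashboardwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDashboard polls for a Running kubernetes-dashboard pod until the
// timeout expires, treating apiserver errors as transient.
func waitForDashboard(client kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// e.g. "connection refused" while the apiserver is stopped:
			// keep polling; the caller eventually sees a timeout error.
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}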
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903: exit status 2 (245.358513ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-208903 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-208903 logs -n 25: (54.997876679s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-519738 -- sudo                         | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-519738                                 | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-184733                              | stopped-upgrade-184733       | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:09 UTC |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	| delete  | -p                                                     | disable-driver-mounts-883991 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | disable-driver-mounts-883991                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:12 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-698195             | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208903            | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-791007  | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC | 30 Aug 23 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-698195                  | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208903                 | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC | 30 Aug 23 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-250163        | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC | 30 Aug 23 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-791007       | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC | 30 Aug 23 22:24 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-250163             | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC | 30 Aug 23 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:16:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
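	(Worked example of that header format, using the first entry below: "I0830 22:16:59.758341  995603 out.go:296] Setting OutFile to fd 1 ..." decodes, per the standard glog/klog convention, as severity I = Info, date 08/30, wall-clock time 22:16:59.758341, thread id 995603, source location out.go:296, followed by the message.)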
	I0830 22:16:59.758341  995603 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:16:59.758470  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758479  995603 out.go:309] Setting ErrFile to fd 2...
	I0830 22:16:59.758484  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758692  995603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:16:59.759241  995603 out.go:303] Setting JSON to false
	I0830 22:16:59.760232  995603 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14367,"bootTime":1693419453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:16:59.760291  995603 start.go:138] virtualization: kvm guest
	I0830 22:16:59.762744  995603 out.go:177] * [old-k8s-version-250163] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:16:59.764395  995603 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:16:59.765863  995603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:16:59.764404  995603 notify.go:220] Checking for updates...
	I0830 22:16:59.767579  995603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:16:59.769244  995603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:16:59.771001  995603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:16:59.772625  995603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:16:59.774574  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:16:59.774929  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.775032  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.790271  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0830 22:16:59.790677  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.791257  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.791283  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.791645  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.791879  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.793885  995603 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:16:59.795414  995603 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:16:59.795716  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.795752  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.810316  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0830 22:16:59.810694  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.811176  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.811201  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.811560  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.811808  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.845962  995603 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:16:59.847399  995603 start.go:298] selected driver: kvm2
	I0830 22:16:59.847410  995603 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.847546  995603 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:16:59.848301  995603 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.848376  995603 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:16:59.862654  995603 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:16:59.863040  995603 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:16:59.863080  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:16:59.863094  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:16:59.863109  995603 start_flags.go:319] config:
	{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.863341  995603 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.865500  995603 out.go:177] * Starting control plane node old-k8s-version-250163 in cluster old-k8s-version-250163
	I0830 22:17:00.916070  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:16:59.866763  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:16:59.866836  995603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0830 22:16:59.866852  995603 cache.go:57] Caching tarball of preloaded images
	I0830 22:16:59.866935  995603 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:16:59.866946  995603 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0830 22:16:59.867091  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:16:59.867314  995603 start.go:365] acquiring machines lock for old-k8s-version-250163: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:17:06.996025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:10.068036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:16.148043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:19.220024  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:25.300036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:28.372088  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:34.452043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:37.524037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:43.604037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:46.676107  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:52.756100  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:55.828195  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:01.908025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:04.980079  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:11.060035  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:14.132025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:20.212050  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:23.283995  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
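	(The repeated "no route to host" dials above are from process 994624, later identified in this log as the no-preload-698195 restart; they are plain TCP failures reaching the VM's SSH port. A minimal, hedged sketch of how one might check the libvirt domain and port 22 reachability by hand on the CI host; the domain name and IP are taken from this run, and the commands are illustrative, not part of the test:
	    # is the domain running, and what address did it lease?
	    virsh -c qemu:///system domstate no-preload-698195
	    virsh -c qemu:///system domifaddr no-preload-698195
	    # can the host reach the guest's SSH port at all?
	    nc -vz -w 5 192.168.72.28 22
	)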
	I0830 22:18:26.288205  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 4m29.4670209s
	I0830 22:18:26.288261  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:26.288276  994705 fix.go:54] fixHost starting: 
	I0830 22:18:26.288621  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:26.288656  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:26.304048  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0830 22:18:26.304613  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:26.305138  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:18:26.305164  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:26.305518  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:26.305719  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:26.305843  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:18:26.307597  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Stopped err=<nil>
	I0830 22:18:26.307639  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	W0830 22:18:26.307827  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:26.309985  994705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208903" ...
	I0830 22:18:26.311551  994705 main.go:141] libmachine: (embed-certs-208903) Calling .Start
	I0830 22:18:26.311750  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring networks are active...
	I0830 22:18:26.312528  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network default is active
	I0830 22:18:26.312814  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network mk-embed-certs-208903 is active
	I0830 22:18:26.313153  994705 main.go:141] libmachine: (embed-certs-208903) Getting domain xml...
	I0830 22:18:26.313857  994705 main.go:141] libmachine: (embed-certs-208903) Creating domain...
	I0830 22:18:26.285881  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:26.285939  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:18:26.288013  994624 machine.go:91] provisioned docker machine in 4m37.410947228s
	I0830 22:18:26.288063  994624 fix.go:56] fixHost completed within 4m37.432260867s
	I0830 22:18:26.288085  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 4m37.432330775s
	W0830 22:18:26.288107  994624 start.go:672] error starting host: provision: host is not running
	W0830 22:18:26.288219  994624 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0830 22:18:26.288225  994624 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:27.529120  994705 main.go:141] libmachine: (embed-certs-208903) Waiting to get IP...
	I0830 22:18:27.530028  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.530390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.530515  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.530404  996319 retry.go:31] will retry after 311.351139ms: waiting for machine to come up
	I0830 22:18:27.843013  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.843398  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.843427  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.843337  996319 retry.go:31] will retry after 367.953943ms: waiting for machine to come up
	I0830 22:18:28.213214  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.213785  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.213820  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.213722  996319 retry.go:31] will retry after 424.275825ms: waiting for machine to come up
	I0830 22:18:28.639216  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.639670  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.639707  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.639609  996319 retry.go:31] will retry after 502.321201ms: waiting for machine to come up
	I0830 22:18:29.143240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.143823  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.143850  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.143790  996319 retry.go:31] will retry after 680.495047ms: waiting for machine to come up
	I0830 22:18:29.825462  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.825879  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.825904  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.825836  996319 retry.go:31] will retry after 756.63617ms: waiting for machine to come up
	I0830 22:18:30.583723  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:30.584179  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:30.584212  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:30.584118  996319 retry.go:31] will retry after 851.722792ms: waiting for machine to come up
	I0830 22:18:31.437603  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:31.438031  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:31.438063  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:31.437986  996319 retry.go:31] will retry after 1.214893807s: waiting for machine to come up
	I0830 22:18:31.289961  994624 start.go:365] acquiring machines lock for no-preload-698195: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:32.654351  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:32.654803  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:32.654829  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:32.654756  996319 retry.go:31] will retry after 1.574180335s: waiting for machine to come up
	I0830 22:18:34.231491  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:34.231911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:34.231944  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:34.231826  996319 retry.go:31] will retry after 1.99107048s: waiting for machine to come up
	I0830 22:18:36.225911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:36.226336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:36.226363  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:36.226283  996319 retry.go:31] will retry after 1.816508761s: waiting for machine to come up
	I0830 22:18:38.044672  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:38.045061  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:38.045094  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:38.045021  996319 retry.go:31] will retry after 2.343148299s: waiting for machine to come up
	I0830 22:18:40.389346  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:40.389753  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:40.389778  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:40.389700  996319 retry.go:31] will retry after 3.682098761s: waiting for machine to come up
	I0830 22:18:45.025750  995192 start.go:369] acquired machines lock for "default-k8s-diff-port-791007" in 3m32.939054887s
	I0830 22:18:45.025823  995192 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:45.025847  995192 fix.go:54] fixHost starting: 
	I0830 22:18:45.026291  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:45.026333  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:45.041161  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0830 22:18:45.041657  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:45.042176  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:18:45.042208  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:45.042544  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:45.042748  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:18:45.042910  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:18:45.044428  995192 fix.go:102] recreateIfNeeded on default-k8s-diff-port-791007: state=Stopped err=<nil>
	I0830 22:18:45.044454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	W0830 22:18:45.044615  995192 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:45.046538  995192 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-791007" ...
	I0830 22:18:44.074916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075386  994705 main.go:141] libmachine: (embed-certs-208903) Found IP for machine: 192.168.50.159
	I0830 22:18:44.075411  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has current primary IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075418  994705 main.go:141] libmachine: (embed-certs-208903) Reserving static IP address...
	I0830 22:18:44.075899  994705 main.go:141] libmachine: (embed-certs-208903) Reserved static IP address: 192.168.50.159
	I0830 22:18:44.075928  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.075939  994705 main.go:141] libmachine: (embed-certs-208903) Waiting for SSH to be available...
	I0830 22:18:44.075959  994705 main.go:141] libmachine: (embed-certs-208903) DBG | skip adding static IP to network mk-embed-certs-208903 - found existing host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"}
	I0830 22:18:44.075968  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Getting to WaitForSSH function...
	I0830 22:18:44.078068  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.078436  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH client type: external
	I0830 22:18:44.078533  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa (-rw-------)
	I0830 22:18:44.078569  994705 main.go:141] libmachine: (embed-certs-208903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:18:44.078590  994705 main.go:141] libmachine: (embed-certs-208903) DBG | About to run SSH command:
	I0830 22:18:44.078622  994705 main.go:141] libmachine: (embed-certs-208903) DBG | exit 0
	I0830 22:18:44.167514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | SSH cmd err, output: <nil>: 
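	(For reference, the external SSH invocation logged just above can be reproduced by hand with the same options; a usage sketch using the key path and address from this run, illustrative only:
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa \
	        docker@192.168.50.159 'exit 0'; echo "exit status: $?"
	)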
	I0830 22:18:44.167898  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetConfigRaw
	I0830 22:18:44.168594  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.170974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.171370  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171696  994705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:18:44.171967  994705 machine.go:88] provisioning docker machine ...
	I0830 22:18:44.171989  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:44.172184  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172371  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:18:44.172397  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172563  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.174522  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174861  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.174894  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174988  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.175159  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175286  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175413  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.175627  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.176111  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.176132  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:18:44.309192  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:18:44.309225  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.311931  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312327  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.312362  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312512  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.312727  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.312919  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.313048  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.313215  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.313623  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.313638  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:18:44.440529  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
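	(A quick, hedged way to confirm the two hostname-provisioning steps above took effect inside the guest; hostname and entry are from this run, the commands are standard and illustrative:
	    cat /etc/hostname                      # expect: embed-certs-208903
	    grep 'embed-certs-208903' /etc/hosts   # expect the 127.0.1.1 entry written above
	)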
	I0830 22:18:44.440594  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:18:44.440641  994705 buildroot.go:174] setting up certificates
	I0830 22:18:44.440653  994705 provision.go:83] configureAuth start
	I0830 22:18:44.440663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.440943  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.443289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443663  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.443705  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443805  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.445987  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446297  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.446328  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446462  994705 provision.go:138] copyHostCerts
	I0830 22:18:44.446524  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:18:44.446550  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:18:44.446638  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:18:44.446750  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:18:44.446763  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:18:44.446800  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:18:44.446907  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:18:44.446919  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:18:44.446955  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:18:44.447036  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
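	(minikube generates that server certificate in Go, but for illustration an equivalent SAN-bearing certificate could be produced with openssl against the same CA material; a hedged sketch using the org and SANs from this run, with assumed local file names:
	    printf 'subjectAltName=IP:192.168.50.159,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:embed-certs-208903\n' > san.cnf
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	        -subj "/O=jenkins.embed-certs-208903" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	        -days 365 -out server.pem -extfile san.cnf
	)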
	I0830 22:18:44.664313  994705 provision.go:172] copyRemoteCerts
	I0830 22:18:44.664387  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:18:44.664434  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.666819  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667160  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.667192  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667338  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.667565  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.667687  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.667839  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:18:44.756922  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:18:44.780430  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:18:44.803396  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:18:44.825975  994705 provision.go:86] duration metric: configureAuth took 385.307932ms
	I0830 22:18:44.826006  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:18:44.826230  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:18:44.826334  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.828862  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829199  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.829240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829383  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.829606  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829770  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829907  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.830104  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.830593  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.830615  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:18:45.025539  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025585  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:18:45.025596  994705 machine.go:91] provisioned docker machine in 853.613637ms
	I0830 22:18:45.025627  994705 fix.go:56] fixHost completed within 18.737351046s
	I0830 22:18:45.025637  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 18.737393499s
	W0830 22:18:45.025662  994705 start.go:672] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0830 22:18:45.025746  994705 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025760  994705 start.go:687] Will try again in 5 seconds ...
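	(The provisioning failure above comes from "systemctl restart crio" reporting "A dependency job for crio.service failed". A hedged sketch of the standard systemd follow-up one might run inside the VM to see which dependency unit failed; these commands are illustrative, not part of the test run:
	    systemctl --failed                          # list failed units
	    systemctl list-dependencies crio.service    # which units crio waits on
	    journalctl -xe -u crio --no-pager | tail -n 50
	)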
	I0830 22:18:45.047821  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Start
	I0830 22:18:45.047982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring networks are active...
	I0830 22:18:45.048684  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network default is active
	I0830 22:18:45.049040  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network mk-default-k8s-diff-port-791007 is active
	I0830 22:18:45.049401  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Getting domain xml...
	I0830 22:18:45.050009  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Creating domain...
	I0830 22:18:46.288943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting to get IP...
	I0830 22:18:46.289982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290359  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.290388  996430 retry.go:31] will retry after 228.105709ms: waiting for machine to come up
	I0830 22:18:46.519862  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520369  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.520342  996430 retry.go:31] will retry after 343.008473ms: waiting for machine to come up
	I0830 22:18:46.865023  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865426  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.865385  996430 retry.go:31] will retry after 467.017605ms: waiting for machine to come up
	I0830 22:18:50.028247  994705 start.go:365] acquiring machines lock for embed-certs-208903: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:47.334027  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334655  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.334600  996430 retry.go:31] will retry after 601.952764ms: waiting for machine to come up
	I0830 22:18:47.937980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.938387  996430 retry.go:31] will retry after 556.18277ms: waiting for machine to come up
	I0830 22:18:48.495747  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496130  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:48.496101  996430 retry.go:31] will retry after 696.126701ms: waiting for machine to come up
	I0830 22:18:49.193405  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193789  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193822  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:49.193752  996430 retry.go:31] will retry after 1.123021492s: waiting for machine to come up
	I0830 22:18:50.318326  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318710  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:50.318637  996430 retry.go:31] will retry after 1.198520166s: waiting for machine to come up
	I0830 22:18:51.518959  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519302  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519332  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:51.519244  996430 retry.go:31] will retry after 1.851352392s: waiting for machine to come up
	I0830 22:18:53.373208  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373676  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373713  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:53.373594  996430 retry.go:31] will retry after 1.789163964s: waiting for machine to come up
	I0830 22:18:55.164132  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164664  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:55.164587  996430 retry.go:31] will retry after 2.037803279s: waiting for machine to come up
	I0830 22:18:57.204503  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204957  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204984  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:57.204919  996430 retry.go:31] will retry after 3.365492251s: waiting for machine to come up
	I0830 22:19:00.572195  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:19:00.572533  996430 retry.go:31] will retry after 3.57478782s: waiting for machine to come up
	I0830 22:19:05.536665  995603 start.go:369] acquired machines lock for "old-k8s-version-250163" in 2m5.669275373s
	I0830 22:19:05.536730  995603 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:05.536751  995603 fix.go:54] fixHost starting: 
	I0830 22:19:05.537197  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:05.537240  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:05.556581  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0830 22:19:05.557016  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:05.557559  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:19:05.557590  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:05.557937  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:05.558124  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:05.558290  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:19:05.559829  995603 fix.go:102] recreateIfNeeded on old-k8s-version-250163: state=Stopped err=<nil>
	I0830 22:19:05.559871  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	W0830 22:19:05.560056  995603 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:05.562726  995603 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-250163" ...
	I0830 22:19:04.151280  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.151787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Found IP for machine: 192.168.61.104
	I0830 22:19:04.151820  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserving static IP address...
	I0830 22:19:04.151839  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has current primary IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.152254  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.152286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserved static IP address: 192.168.61.104
	I0830 22:19:04.152306  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | skip adding static IP to network mk-default-k8s-diff-port-791007 - found existing host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"}
	I0830 22:19:04.152324  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for SSH to be available...
	I0830 22:19:04.152339  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Getting to WaitForSSH function...
	I0830 22:19:04.154335  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.154701  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154791  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH client type: external
	I0830 22:19:04.154833  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa (-rw-------)
	I0830 22:19:04.154852  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:04.154868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | About to run SSH command:
	I0830 22:19:04.154879  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | exit 0
	I0830 22:19:04.251692  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:04.252182  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetConfigRaw
	I0830 22:19:04.252842  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.255184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255536  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.255571  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255850  995192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/config.json ...
	I0830 22:19:04.256118  995192 machine.go:88] provisioning docker machine ...
	I0830 22:19:04.256143  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:04.256344  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256504  995192 buildroot.go:166] provisioning hostname "default-k8s-diff-port-791007"
	I0830 22:19:04.256525  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256653  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.259010  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259366  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.259389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259509  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.259667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.260115  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.260787  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.260810  995192 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-791007 && echo "default-k8s-diff-port-791007" | sudo tee /etc/hostname
	I0830 22:19:04.403123  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-791007
	
	I0830 22:19:04.403166  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.405835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406219  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.406270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406476  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.406704  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.406892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.407047  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.407233  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.407634  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.407658  995192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-791007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-791007/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-791007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:04.549964  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:04.550002  995192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:04.550039  995192 buildroot.go:174] setting up certificates
	I0830 22:19:04.550053  995192 provision.go:83] configureAuth start
	I0830 22:19:04.550071  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.550422  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.552844  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553116  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.553150  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553313  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.555514  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.555880  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.555917  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.556036  995192 provision.go:138] copyHostCerts
	I0830 22:19:04.556100  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:04.556133  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:04.556213  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:04.556343  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:04.556354  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:04.556392  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:04.556485  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:04.556496  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:04.556528  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:04.556607  995192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-791007 san=[192.168.61.104 192.168.61.104 localhost 127.0.0.1 minikube default-k8s-diff-port-791007]
	I0830 22:19:04.756354  995192 provision.go:172] copyRemoteCerts
	I0830 22:19:04.756413  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:04.756438  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.759134  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759511  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.759544  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759739  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.759977  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.760153  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.760297  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:04.858949  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:04.882455  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0830 22:19:04.905659  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:04.929876  995192 provision.go:86] duration metric: configureAuth took 379.794026ms
	I0830 22:19:04.929905  995192 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:04.930124  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:04.930228  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.932799  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933159  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.933192  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933316  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.933531  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933703  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.934015  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.934606  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.934633  995192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:05.266317  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:05.266349  995192 machine.go:91] provisioned docker machine in 1.010213866s
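Note: the "printf %!s(MISSING)" fragments in the provisioning commands above are an artifact of minikube's logger consuming the %s verb; the command actually sent over SSH is, approximately:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

On this host the whole pipeline succeeded. In the embed-certs-208903 attempt earlier, the tee step completed (its output is echoed back in the error) and only the final systemctl restart of crio failed.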
	I0830 22:19:05.266363  995192 start.go:300] post-start starting for "default-k8s-diff-port-791007" (driver="kvm2")
	I0830 22:19:05.266378  995192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:05.266402  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.266764  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:05.266802  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.269938  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270300  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.270345  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270472  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.270650  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.270800  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.270922  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.365334  995192 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:05.369583  995192 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:05.369608  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:05.369701  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:05.369790  995192 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:05.369879  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:05.377933  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:05.401027  995192 start.go:303] post-start completed in 134.648062ms
	I0830 22:19:05.401051  995192 fix.go:56] fixHost completed within 20.37520461s
	I0830 22:19:05.401079  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.404156  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.404629  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404765  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.404960  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405138  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405260  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.405463  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:05.405917  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:05.405930  995192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:05.536449  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433945.485000324
	
	I0830 22:19:05.536479  995192 fix.go:206] guest clock: 1693433945.485000324
	I0830 22:19:05.536490  995192 fix.go:219] Guest: 2023-08-30 22:19:05.485000324 +0000 UTC Remote: 2023-08-30 22:19:05.401056033 +0000 UTC m=+233.468479321 (delta=83.944291ms)
	I0830 22:19:05.536524  995192 fix.go:190] guest clock delta is within tolerance: 83.944291ms
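Note: the clock probe above is "date +%s.%N" run on the guest (the "%!s(MISSING).%!N(MISSING)" form is again the logger eating the % verbs); minikube compares the returned guest timestamp (1693433945.485000324) with the local timestamp it recorded for the same moment and accepts the ~84ms delta as within tolerance. A hedged sketch of the same check done by hand, reusing the key path and address from the log above:

    # compare guest and host wall clocks (sketch; assumes the logged key path and IP)
    KEY=/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa
    guest=$(ssh -o StrictHostKeyChecking=no -i "$KEY" docker@192.168.61.104 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "clock delta: %.3f s\n", h - g }'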
	I0830 22:19:05.536535  995192 start.go:83] releasing machines lock for "default-k8s-diff-port-791007", held for 20.510742441s
	I0830 22:19:05.536569  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.536868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:05.539651  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540017  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.540057  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540196  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540737  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540911  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540975  995192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:05.541036  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.541133  995192 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:05.541172  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.543846  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.543892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544250  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544317  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544411  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544540  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544627  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544707  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544792  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544865  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544926  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.544972  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.677442  995192 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:05.683243  995192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:05.832776  995192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:05.838924  995192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:05.839000  995192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:05.857231  995192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:05.857251  995192 start.go:466] detecting cgroup driver to use...
	I0830 22:19:05.857349  995192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:05.875107  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:05.888540  995192 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:05.888603  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:05.901129  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:05.914011  995192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:06.015763  995192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:06.144950  995192 docker.go:212] disabling docker service ...
	I0830 22:19:06.145052  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:06.159373  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:06.172560  995192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:06.279514  995192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:06.413719  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:06.427047  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:06.443765  995192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:06.443853  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.452621  995192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:06.452690  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.461365  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.470052  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.478685  995192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
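Note: the sed edits above (pause_image, cgroup_manager, conmon_cgroup) leave the drop-in /etc/crio/crio.conf.d/02-crio.conf declaring, approximately, the following runtime settings; the rest of that file is not shown in the log:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"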
	I0830 22:19:06.487763  995192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:06.495483  995192 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:06.495551  995192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:06.508009  995192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
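Note: the sysctl probe fails only because the br_netfilter module has not been loaded yet in the freshly booted guest, so the code falls back to loading it and then enables IPv4 forwarding. The equivalent manual sequence is roughly:

    # make bridged traffic visible to iptables; load the module if the sysctl key is missing
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    # let the kernel forward IPv4 packets between interfaces (needed for pod networking)
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"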
	I0830 22:19:06.516397  995192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:06.615209  995192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:06.792388  995192 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:06.792466  995192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:06.798170  995192 start.go:534] Will wait 60s for crictl version
	I0830 22:19:06.798231  995192 ssh_runner.go:195] Run: which crictl
	I0830 22:19:06.801828  995192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:06.842351  995192 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:06.842459  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.898609  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.962179  995192 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:06.963711  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:06.966803  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967189  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:06.967225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967412  995192 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:06.972033  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:05.564313  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Start
	I0830 22:19:05.564511  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring networks are active...
	I0830 22:19:05.565235  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network default is active
	I0830 22:19:05.565567  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network mk-old-k8s-version-250163 is active
	I0830 22:19:05.565954  995603 main.go:141] libmachine: (old-k8s-version-250163) Getting domain xml...
	I0830 22:19:05.566644  995603 main.go:141] libmachine: (old-k8s-version-250163) Creating domain...
	I0830 22:19:06.869485  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting to get IP...
	I0830 22:19:06.870595  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:06.871071  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:06.871133  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:06.871046  996542 retry.go:31] will retry after 294.811471ms: waiting for machine to come up
	I0830 22:19:07.167657  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.168126  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.168172  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.168099  996542 retry.go:31] will retry after 376.474639ms: waiting for machine to come up
	I0830 22:19:07.546876  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.547389  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.547419  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.547354  996542 retry.go:31] will retry after 329.757182ms: waiting for machine to come up
	I0830 22:19:07.878995  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.879572  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.879601  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.879529  996542 retry.go:31] will retry after 567.335814ms: waiting for machine to come up
	I0830 22:19:08.448373  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.448996  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.449028  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.448958  996542 retry.go:31] will retry after 510.216093ms: waiting for machine to come up
	I0830 22:19:08.960855  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.961412  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.961451  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.961326  996542 retry.go:31] will retry after 688.575912ms: waiting for machine to come up
	I0830 22:19:09.651966  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:09.652379  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:09.652411  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:09.652346  996542 retry.go:31] will retry after 1.130912238s: waiting for machine to come up
	I0830 22:19:06.984632  995192 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:06.984698  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:07.020200  995192 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:07.020282  995192 ssh_runner.go:195] Run: which lz4
	I0830 22:19:07.024254  995192 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:19:07.028470  995192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:07.028508  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:19:08.986852  995192 crio.go:444] Took 1.962647 seconds to copy over tarball
	I0830 22:19:08.986915  995192 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:10.784839  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:10.785424  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:10.785456  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:10.785355  996542 retry.go:31] will retry after 898.98114ms: waiting for machine to come up
	I0830 22:19:11.685890  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:11.686614  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:11.686646  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:11.686558  996542 retry.go:31] will retry after 1.621086004s: waiting for machine to come up
	I0830 22:19:13.310234  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:13.310696  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:13.310721  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:13.310630  996542 retry.go:31] will retry after 1.652651656s: waiting for machine to come up
	I0830 22:19:12.113071  995192 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.126115747s)
	I0830 22:19:12.113107  995192 crio.go:451] Took 3.126230 seconds to extract the tarball
	I0830 22:19:12.113120  995192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:12.156320  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:12.200547  995192 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:19:12.200573  995192 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:19:12.200652  995192 ssh_runner.go:195] Run: crio config
	I0830 22:19:12.273153  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:12.273180  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:12.273205  995192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:12.273231  995192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-791007 NodeName:default-k8s-diff-port-791007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:19:12.273413  995192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-791007"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:12.273497  995192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-791007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0830 22:19:12.273573  995192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:19:12.283536  995192 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:12.283609  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:12.292260  995192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0830 22:19:12.309407  995192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:12.325757  995192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0830 22:19:12.342664  995192 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:12.346459  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:12.358721  995192 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007 for IP: 192.168.61.104
	I0830 22:19:12.358797  995192 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:12.359010  995192 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:12.359066  995192 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:12.359147  995192 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.key
	I0830 22:19:12.359219  995192 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key.a202b4d9
	I0830 22:19:12.359255  995192 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key
	I0830 22:19:12.359363  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:12.359390  995192 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:12.359400  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:12.359424  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:12.359449  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:12.359471  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:12.359507  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:12.360328  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:12.385275  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 22:19:12.410697  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:12.434240  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:19:12.457206  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:12.484695  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:12.507670  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:12.531114  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:12.554501  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:12.579425  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:12.603211  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:12.628506  995192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:12.645536  995192 ssh_runner.go:195] Run: openssl version
	I0830 22:19:12.650882  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:12.660449  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665173  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665239  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.670785  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:12.681196  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:12.690775  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695204  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695262  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.700668  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:12.710205  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:12.719691  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724744  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724803  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.730472  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
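The openssl x509 -hash / ln -fs pairs above install each CA under /etc/ssl/certs/<subject-hash>.0 so OpenSSL's hashed directory lookup can find it. A rough Go equivalent of one such pair, shelling out to openssl the same way (paths are illustrative, not the tool's own code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// subjectHashLink computes the certificate's OpenSSL subject hash and
	// creates the <hash>.0 symlink, like the ln -fs commands in the log.
	func subjectHashLink(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // -f behaviour: replace an existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}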
	I0830 22:19:12.740194  995192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:12.744773  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:12.750633  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:12.756228  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:12.762258  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:12.767895  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:12.773716  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
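The series of openssl x509 -checkend 86400 runs above asserts that each control-plane certificate remains valid for at least another 24 hours before the existing cluster is reused. The same check expressed with Go's crypto/x509 (the path is just one of the files checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within
	// the given window, the condition `openssl x509 -checkend` tests above.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}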
	I0830 22:19:12.779716  995192 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-791007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:12.779849  995192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:12.779895  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:12.808983  995192 cri.go:89] found id: ""
	I0830 22:19:12.809055  995192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:12.818188  995192 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:12.818208  995192 kubeadm.go:636] restartCluster start
	I0830 22:19:12.818258  995192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:12.829333  995192 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.830440  995192 kubeconfig.go:92] found "default-k8s-diff-port-791007" server: "https://192.168.61.104:8444"
	I0830 22:19:12.833172  995192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:12.841419  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.841468  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.852072  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.852092  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.852135  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.862195  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.362894  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.362981  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.374932  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.862450  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.862558  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.874629  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.363249  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.363368  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.375071  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.862656  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.862767  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.874077  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.363282  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.363389  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.374762  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.862279  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.862375  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.873942  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.362457  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.362554  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.373922  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.862336  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.862415  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.873540  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
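Each "Checking apiserver status ..." block above is one iteration of a poll: run pgrep for the kube-apiserver process over SSH, and when nothing matches, wait roughly half a second and try again until an overall deadline. A local sketch of that loop (the 10-second budget is an assumption for illustration; minikube's real timeout is longer):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverPID shells out to pgrep the way the "Checking apiserver status"
	// step does; an error means kube-apiserver is not running yet.
	func apiserverPID() (string, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return "", err
		}
		return string(out), nil
	}

	func main() {
		deadline := time.Now().Add(10 * time.Second) // assumed overall budget
		for time.Now().Before(deadline) {
			if pid, err := apiserverPID(); err == nil {
				fmt.Print("apiserver pid: ", pid)
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s gaps in the timestamps above
		}
		fmt.Println("stopped: unable to get apiserver pid")
	}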
	I0830 22:19:14.964585  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:14.965020  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:14.965042  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:14.964995  996542 retry.go:31] will retry after 1.89297354s: waiting for machine to come up
	I0830 22:19:16.859309  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:16.859825  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:16.859852  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:16.859777  996542 retry.go:31] will retry after 2.908196896s: waiting for machine to come up
	I0830 22:19:17.363243  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.363347  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.378177  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:17.862706  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.862785  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.877394  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.363052  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.363183  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.377397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.862918  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.862995  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.878397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.362972  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.363052  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.374591  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.863153  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.863233  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.878572  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.362613  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.362703  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.374006  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.862535  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.862634  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.874066  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.362612  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.362721  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.375262  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.863011  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.863113  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.874498  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.771969  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:19.772453  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:19.772482  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:19.772410  996542 retry.go:31] will retry after 3.967899631s: waiting for machine to come up
	I0830 22:19:23.743741  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744344  995603 main.go:141] libmachine: (old-k8s-version-250163) Found IP for machine: 192.168.39.10
	I0830 22:19:23.744371  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserving static IP address...
	I0830 22:19:23.744387  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has current primary IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744827  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.744860  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserved static IP address: 192.168.39.10
	I0830 22:19:23.744877  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | skip adding static IP to network mk-old-k8s-version-250163 - found existing host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"}
	I0830 22:19:23.744904  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Getting to WaitForSSH function...
	I0830 22:19:23.744920  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting for SSH to be available...
	I0830 22:19:23.747285  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747642  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.747676  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747864  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH client type: external
	I0830 22:19:23.747896  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa (-rw-------)
	I0830 22:19:23.747935  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:23.747954  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | About to run SSH command:
	I0830 22:19:23.747971  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | exit 0
	I0830 22:19:23.836434  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:23.837035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetConfigRaw
	I0830 22:19:23.837845  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:23.840648  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.841088  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841433  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:19:23.841663  995603 machine.go:88] provisioning docker machine ...
	I0830 22:19:23.841688  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:23.841895  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842049  995603 buildroot.go:166] provisioning hostname "old-k8s-version-250163"
	I0830 22:19:23.842069  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842291  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.844953  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845376  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.845408  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845678  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.845885  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846186  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.846361  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.846839  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.846861  995603 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-250163 && echo "old-k8s-version-250163" | sudo tee /etc/hostname
	I0830 22:19:23.981507  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-250163
	
	I0830 22:19:23.981556  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.984891  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.985249  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985369  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.985604  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.985811  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.986000  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.986199  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.986603  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.986620  995603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-250163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-250163/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-250163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:24.115894  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:24.115952  995603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:24.115985  995603 buildroot.go:174] setting up certificates
	I0830 22:19:24.115996  995603 provision.go:83] configureAuth start
	I0830 22:19:24.116014  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:24.116342  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:24.118887  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119266  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.119312  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119572  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.122166  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122551  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.122590  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122700  995603 provision.go:138] copyHostCerts
	I0830 22:19:24.122769  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:24.122793  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:24.122868  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:24.122989  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:24.123004  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:24.123035  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:24.123168  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:24.123184  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:24.123217  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:24.123302  995603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-250163 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube old-k8s-version-250163]
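provision.go:112 above issues a fresh server certificate whose SANs cover the machine IP, localhost, and the hostname. A compressed sketch of issuing such a cert with Go's crypto/x509 (self-signed here for brevity; minikube actually signs it with the ca.pem / ca-key.pem pair named in the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Generate the server key, then build a template carrying the SANs
		// listed in the provision.go:112 line above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-250163"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-250163"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.10"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		fmt.Fprintln(os.Stderr, "wrote server certificate PEM to stdout")
	}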
	I0830 22:19:24.303093  995603 provision.go:172] copyRemoteCerts
	I0830 22:19:24.303156  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:24.303182  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.305900  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306173  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.306199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306352  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.306545  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.306728  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.306873  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.393858  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:24.418791  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:19:24.441090  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:24.462926  995603 provision.go:86] duration metric: configureAuth took 346.913079ms
	I0830 22:19:24.462952  995603 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:24.463136  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:19:24.463224  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.465978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466321  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.466357  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466559  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.466785  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.466934  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.467035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.467173  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.467657  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.467676  995603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:25.058077  994624 start.go:369] acquired machines lock for "no-preload-698195" in 53.768050843s
	I0830 22:19:25.058128  994624 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:25.058141  994624 fix.go:54] fixHost starting: 
	I0830 22:19:25.058564  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:25.058603  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:25.076580  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0830 22:19:25.077082  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:25.077788  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:19:25.077824  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:25.078214  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:25.078418  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:25.078695  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:19:25.080411  994624 fix.go:102] recreateIfNeeded on no-preload-698195: state=Stopped err=<nil>
	I0830 22:19:25.080447  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	W0830 22:19:25.080636  994624 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:25.082566  994624 out.go:177] * Restarting existing kvm2 VM for "no-preload-698195" ...
	I0830 22:19:24.795523  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:24.795562  995603 machine.go:91] provisioned docker machine in 953.87669ms
	I0830 22:19:24.795575  995603 start.go:300] post-start starting for "old-k8s-version-250163" (driver="kvm2")
	I0830 22:19:24.795590  995603 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:24.795616  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:24.795984  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:24.796046  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.799136  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799534  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.799561  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799797  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.799996  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.800210  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.800396  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.890335  995603 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:24.894780  995603 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:24.894807  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:24.894890  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:24.894986  995603 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:24.895110  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:24.907259  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:24.932802  995603 start.go:303] post-start completed in 137.211475ms
	I0830 22:19:24.932829  995603 fix.go:56] fixHost completed within 19.396077949s
	I0830 22:19:24.932858  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.935762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936118  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.936160  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936310  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.936538  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936721  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936918  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.937109  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.937748  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.937767  995603 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 22:19:25.057876  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433965.004650095
	
	I0830 22:19:25.057911  995603 fix.go:206] guest clock: 1693433965.004650095
	I0830 22:19:25.057924  995603 fix.go:219] Guest: 2023-08-30 22:19:25.004650095 +0000 UTC Remote: 2023-08-30 22:19:24.932833395 +0000 UTC m=+145.224486267 (delta=71.8167ms)
	I0830 22:19:25.057987  995603 fix.go:190] guest clock delta is within tolerance: 71.8167ms
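fix.go compares the guest's `date +%s.%N` reading against the host-side timestamp and only resyncs the clock when the difference exceeds a tolerance; here the delta is 71.8167ms and passes. A small worked version of that comparison using the two timestamps from the log (the 2-second tolerance is an assumption for illustration):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta returns the absolute difference between the guest and host
	// clocks, the quantity logged as "delta=..." above.
	func clockDelta(guest, host time.Time) time.Duration {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		guest := time.Unix(1693433965, 4650095)  // guest clock: 1693433965.004650095
		host := time.Unix(1693433964, 932833395) // host-side reference from the log
		const tolerance = 2 * time.Second        // assumed tolerance, for illustration
		d := clockDelta(guest, host)
		if d <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", d) // prints 71.8167ms
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
		}
	}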
	I0830 22:19:25.057998  995603 start.go:83] releasing machines lock for "old-k8s-version-250163", held for 19.521294969s
	I0830 22:19:25.058036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.058351  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:25.061325  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061749  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.061782  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061965  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062635  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062921  995603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:25.062977  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.063084  995603 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:25.063119  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.065978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066217  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066375  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066428  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066620  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066668  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066784  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066806  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.066953  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.067142  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067206  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067278  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.067389  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.181017  995603 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:25.188428  995603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:25.337310  995603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:25.346144  995603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:25.346231  995603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:25.368931  995603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:25.368966  995603 start.go:466] detecting cgroup driver to use...
	I0830 22:19:25.369048  995603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:25.383524  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:25.399296  995603 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:25.399365  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:25.416387  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:25.430426  995603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:25.552861  995603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:25.699278  995603 docker.go:212] disabling docker service ...
	I0830 22:19:25.699350  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:25.718108  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:25.736420  995603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:25.871165  995603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:25.993674  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:26.009215  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:26.027014  995603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0830 22:19:26.027122  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.038902  995603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:26.038985  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.051908  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.062635  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
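
The three sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" right after it. An illustrative Go version of the same text rewrite (key names and file semantics from the log; the helper name is made up):

// crio_conf.go: rewrite pause_image / cgroup_manager / conmon_cgroup in a crio.conf fragment.
package main

import (
	"fmt"
	"strings"
)

func rewriteCrioConf(in, pauseImage, cgroupManager string) string {
	var out []string
	for _, line := range strings.Split(in, "\n") {
		trimmed := strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(trimmed, "pause_image"):
			line = fmt.Sprintf("pause_image = %q", pauseImage)
		case strings.HasPrefix(trimmed, "conmon_cgroup"):
			continue // dropped here, re-added right after cgroup_manager below
		case strings.HasPrefix(trimmed, "cgroup_manager"):
			out = append(out, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
			line = `conmon_cgroup = "pod"`
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Println(rewriteCrioConf(conf, "registry.k8s.io/pause:3.1", "cgroupfs"))
}
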
	I0830 22:19:26.073049  995603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:26.086514  995603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:26.098352  995603 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:26.098405  995603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:26.117326  995603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
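
The netfilter check degrades gracefully: if the bridge sysctl key is missing, br_netfilter is loaded, and IPv4 forwarding is enabled either way. A sketch of that fallback, to be run as root (paths and module name from the log):

// netfilter.go: load br_netfilter when the bridge sysctl is absent, then enable ip_forward.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// sysctl key not present yet, which "might be okay": load the module.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
		}
	}
	// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
	}
}
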
	I0830 22:19:26.129854  995603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:26.259656  995603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:26.476938  995603 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:26.477034  995603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:26.482773  995603 start.go:534] Will wait 60s for crictl version
	I0830 22:19:26.482841  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:26.486853  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:26.525498  995603 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
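
start.go waits up to 60s for the CRI-O socket and then for a working crictl before proceeding. A simple sketch of that bounded poll (socket path from the log; the 500ms interval is illustrative):

// wait_socket.go: poll until a path exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
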
	I0830 22:19:26.525595  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.585226  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.641386  995603 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0830 22:19:22.362364  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:22.362448  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:22.373701  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:22.842449  995192 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:22.842531  995192 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:22.842551  995192 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:22.842623  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:22.871557  995192 cri.go:89] found id: ""
	I0830 22:19:22.871624  995192 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:22.886295  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:22.894486  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:22.894549  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902556  995192 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902578  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.017775  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.631493  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.831074  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.923222  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.994499  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:23.994583  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.007515  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.519195  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.019167  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.519068  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.019708  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.519664  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.547751  995192 api_server.go:72] duration metric: took 2.553248139s to wait for apiserver process to appear ...
	I0830 22:19:26.547794  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:26.547816  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
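
The healthz wait that starts here simply polls the apiserver endpoint, treating 403 (RBAC not yet bootstrapped) and 500 (post-start hooks still pending) as "keep waiting". A sketch of such a probe (endpoint from the log; TLS verification is skipped only because this is an illustrative check against a self-signed cert):

// healthz_wait.go: poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.104:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 and 500 are expected while the control plane is still coming up.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver healthz")
}
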
	I0830 22:19:25.084008  994624 main.go:141] libmachine: (no-preload-698195) Calling .Start
	I0830 22:19:25.084189  994624 main.go:141] libmachine: (no-preload-698195) Ensuring networks are active...
	I0830 22:19:25.085011  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network default is active
	I0830 22:19:25.085319  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network mk-no-preload-698195 is active
	I0830 22:19:25.085676  994624 main.go:141] libmachine: (no-preload-698195) Getting domain xml...
	I0830 22:19:25.086427  994624 main.go:141] libmachine: (no-preload-698195) Creating domain...
	I0830 22:19:26.443042  994624 main.go:141] libmachine: (no-preload-698195) Waiting to get IP...
	I0830 22:19:26.444179  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.444691  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.444784  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.444686  996676 retry.go:31] will retry after 208.17912ms: waiting for machine to come up
	I0830 22:19:26.654132  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.654621  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.654651  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.654581  996676 retry.go:31] will retry after 304.249592ms: waiting for machine to come up
	I0830 22:19:26.960205  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.960990  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.961014  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.960912  996676 retry.go:31] will retry after 342.108913ms: waiting for machine to come up
	I0830 22:19:27.304766  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.305661  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.305700  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.305602  996676 retry.go:31] will retry after 500.147687ms: waiting for machine to come up
	I0830 22:19:27.808375  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.808867  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.808884  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.808796  996676 retry.go:31] will retry after 562.543443ms: waiting for machine to come up
	I0830 22:19:28.373420  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:28.373912  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:28.373938  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:28.373863  996676 retry.go:31] will retry after 755.787662ms: waiting for machine to come up
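
The retry.go lines above wait for the libvirt domain to obtain a DHCP lease, retrying with a delay that grows on each attempt (208ms, 304ms, 342ms, ...). A generic sketch of that pattern; the lookup closure below is a stand-in for the real lease query:

// retry_ip.go: retry a lookup with a growing, slightly jittered delay.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/4))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the delay between attempts, as in the log
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	err := retry(10, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err)
}
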
	I0830 22:19:26.642985  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:26.646304  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646712  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:26.646773  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646957  995603 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:26.652439  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:26.667339  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:19:26.667418  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:26.703670  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:26.703750  995603 ssh_runner.go:195] Run: which lz4
	I0830 22:19:26.708087  995603 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 22:19:26.712329  995603 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:26.712362  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0830 22:19:28.602303  995603 crio.go:444] Took 1.894253 seconds to copy over tarball
	I0830 22:19:28.602408  995603 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
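
When the stat existence check fails, the preload tarball is transferred and unpacked with lz4 into /var so the container images are already present before kubeadm runs. A local sketch of that check-copy-extract sequence (cache path and tar flags from the log; the cp stands in for the real scp over SSH):

// preload.go: ensure the preloaded image tarball is present, then extract it.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const target = "/preloaded.tar.lz4"
	const cached = "/home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"

	if _, err := os.Stat(target); err != nil {
		// existence check failed, so transfer the tarball (scp in the real flow).
		if out, err := exec.Command("sudo", "cp", cached, target).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "copy preload: %v: %s\n", err, out)
			os.Exit(1)
		}
	}
	// sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", target).CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "extract preload: %v: %s\n", err, out)
		os.Exit(1)
	}
	fmt.Println("preloaded images extracted under /var")
}
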
	I0830 22:19:30.838763  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.838807  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:30.838824  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:30.908950  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.908987  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:31.409372  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.420411  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.420480  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:31.909095  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.916778  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.916813  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:29.130983  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:29.131530  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:29.131565  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:29.131459  996676 retry.go:31] will retry after 951.657872ms: waiting for machine to come up
	I0830 22:19:30.084853  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:30.085280  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:30.085306  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:30.085247  996676 retry.go:31] will retry after 1.469099841s: waiting for machine to come up
	I0830 22:19:31.556432  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:31.556893  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:31.556918  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:31.556809  996676 retry.go:31] will retry after 1.217757948s: waiting for machine to come up
	I0830 22:19:32.775796  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:32.776120  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:32.776152  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:32.776080  996676 retry.go:31] will retry after 2.032727742s: waiting for machine to come up
	I0830 22:19:31.859924  995603 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.257478408s)
	I0830 22:19:31.859957  995603 crio.go:451] Took 3.257622 seconds to extract the tarball
	I0830 22:19:31.859970  995603 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:31.917027  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:31.965752  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:31.965777  995603 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:31.965886  995603 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.965944  995603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.965980  995603 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.965879  995603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.966084  995603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.965878  995603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.965967  995603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:31.965901  995603 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968024  995603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.968045  995603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.968079  995603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.968186  995603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.968191  995603 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.968193  995603 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968248  995603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.968766  995603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.140478  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.140975  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.157997  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0830 22:19:32.159468  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.159950  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.160033  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.161682  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.255481  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:32.261235  995603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0830 22:19:32.261291  995603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.261340  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.282724  995603 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0830 22:19:32.282781  995603 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.282854  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378268  995603 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0830 22:19:32.378372  995603 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0830 22:19:32.378417  995603 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0830 22:19:32.378507  995603 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.378551  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378377  995603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0830 22:19:32.378578  995603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0830 22:19:32.378591  995603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.378600  995603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.378295  995603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378632  995603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.378439  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378657  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.468864  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.468935  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.469002  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.469032  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.469123  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.469183  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0830 22:19:32.469184  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.563508  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0830 22:19:32.563630  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0830 22:19:32.586962  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0830 22:19:32.587044  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0830 22:19:32.587059  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0830 22:19:32.587115  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0830 22:19:32.587208  995603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.587265  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0830 22:19:32.592221  995603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0830 22:19:32.592246  995603 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.592300  995603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0830 22:19:34.254194  995603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.661863162s)
	I0830 22:19:34.254235  995603 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0830 22:19:34.254281  995603 cache_images.go:92] LoadImages completed in 2.288490025s
	W0830 22:19:34.254418  995603 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
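
LoadImages inspects each required image in the runtime, removes a stale tag if the expected digest is missing, and loads the cached tarball with podman; here only pause_3.1 was present in the cache, hence the warning. A sketch of that per-image check-and-load loop (commands and the /var/lib/minikube/images naming from the log; the image list below is truncated for illustration):

// load_images.go: load cached image tarballs for images the runtime is missing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hasImage(ref string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

func main() {
	images := []string{"registry.k8s.io/pause:3.1", "registry.k8s.io/coredns:1.6.2"}
	for _, ref := range images {
		if hasImage(ref) {
			continue
		}
		// drop whatever partial tag the runtime has before reloading.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", ref).Run()
		// registry.k8s.io/pause:3.1 -> pause_3.1, mirroring the cache naming above.
		name := strings.ReplaceAll(ref[strings.LastIndex(ref, "/")+1:], ":", "_")
		tarball := "/var/lib/minikube/images/" + name
		if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
			fmt.Printf("unable to load cached image %s: %v: %s\n", ref, err, out)
		}
	}
}
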
	I0830 22:19:34.254514  995603 ssh_runner.go:195] Run: crio config
	I0830 22:19:34.338842  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.338876  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.338903  995603 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:34.338929  995603 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-250163 NodeName:old-k8s-version-250163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 22:19:34.339134  995603 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-250163"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-250163
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.10:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:34.339240  995603 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-250163 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:19:34.339313  995603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0830 22:19:34.348990  995603 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:34.349076  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:34.358084  995603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0830 22:19:34.376989  995603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:34.396552  995603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0830 22:19:34.416666  995603 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:34.421910  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
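
The one-liner above pins control-plane.minikube.internal in /etc/hosts by filtering out any old entry and appending a fresh one. A sketch of the same rewrite (hostname and IP from the log; writing via a temp file then renaming is an illustrative substitute for the tmp-then-cp shell shape):

// hosts_pin.go: replace any existing /etc/hosts entry for a name with a new IP.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// equivalent of grep -v $'\t<name>$': drop old entries for this name.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.10", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
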
	I0830 22:19:34.436393  995603 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163 for IP: 192.168.39.10
	I0830 22:19:34.436490  995603 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:34.436717  995603 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:34.436774  995603 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:34.436867  995603 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.key
	I0830 22:19:34.436944  995603 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key.713efbbe
	I0830 22:19:34.437006  995603 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key
	I0830 22:19:34.437140  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:34.437187  995603 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:34.437203  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:34.437249  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:34.437284  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:34.437320  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:34.437388  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:34.438079  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:34.470943  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:19:34.503477  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:34.533783  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:19:34.562423  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:34.594418  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:34.625417  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:34.657444  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:34.689407  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:34.719004  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:34.745856  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:32.410110  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.418241  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.418269  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:32.910053  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.915839  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.915870  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.410086  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.488115  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.488161  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.909647  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.915252  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.915284  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.409978  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.418957  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:34.418995  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.909561  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.925400  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:19:34.938760  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:19:34.938793  995192 api_server.go:131] duration metric: took 8.390990557s to wait for apiserver health ...
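The health wait that finishes here is plain HTTP polling of the apiserver's /healthz endpoint. A hand-run, illustrative equivalent (not part of the captured log) is sketched below; the URL is taken from the log, while the retry budget and the use of -k to skip TLS verification are assumptions for the example.

    # Illustrative sketch only: poll /healthz until it stops returning the 500s seen above.
    for i in $(seq 1 120); do
      if curl -ks https://192.168.61.104:8444/healthz | grep -qx ok; then
        echo "apiserver healthy"
        break
      fi
      sleep 1
    done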
	I0830 22:19:34.938804  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.938813  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.941052  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:19:34.942805  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:34.967544  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
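For context, a bridge CNI conflist of the kind copied to /etc/cni/net.d/1-k8s.conflist above generally looks like the generic sketch below. This is an assumption-laden example, not the exact 457-byte file minikube wrote; in particular the pod subnet value is illustrative.

    # Generic bridge CNI conflist example -- not the exact file from the log above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF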
	I0830 22:19:34.998450  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:35.012600  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:35.012681  995192 system_pods.go:61] "coredns-5dd5756b68-992p2" [83ad338b-0338-45c3-a5ed-f772d100046b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:35.012702  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [4ed4f652-47c4-4d79-b8a8-dd0cc778bce0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:19:35.012714  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [c01b9dfc-ad6f-4348-abf0-fde4a64bfa98] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:19:35.012732  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [94cbccaf-3d5a-480c-8ee0-b8af5030909d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:19:35.012748  995192 system_pods.go:61] "kube-proxy-vckmf" [03f05466-f99b-4803-9164-233bfb9e7bb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:19:35.012760  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [2c5e190d-c93b-400a-8538-e31cc2016cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:19:35.012774  995192 system_pods.go:61] "metrics-server-57f55c9bc5-p8pp2" [4eaff1be-4258-427b-a110-47dabbffecee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:19:35.012788  995192 system_pods.go:61] "storage-provisioner" [8db3da8b-8256-405d-8d9c-79fdb6da8ab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:19:35.012800  995192 system_pods.go:74] duration metric: took 14.324835ms to wait for pod list to return data ...
	I0830 22:19:35.012814  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:35.024186  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:35.024216  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:35.024229  995192 node_conditions.go:105] duration metric: took 11.409776ms to run NodePressure ...
	I0830 22:19:35.024284  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:35.318824  995192 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324484  995192 kubeadm.go:787] kubelet initialised
	I0830 22:19:35.324512  995192 kubeadm.go:788] duration metric: took 5.656923ms waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324525  995192 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:19:35.334137  995192 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:34.810276  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:34.810797  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:34.810836  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:34.810732  996676 retry.go:31] will retry after 2.550508742s: waiting for machine to come up
	I0830 22:19:37.364002  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:37.364550  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:37.364582  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:37.364489  996676 retry.go:31] will retry after 2.230782644s: waiting for machine to come up
	I0830 22:19:34.771235  995603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:34.787672  995603 ssh_runner.go:195] Run: openssl version
	I0830 22:19:34.793400  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:34.803208  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808108  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808166  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.814296  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:34.824791  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:34.838527  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844726  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844789  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.852442  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:34.862510  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:34.875456  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880581  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880702  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.886591  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:34.897133  995603 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:34.902292  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:34.908905  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:34.915276  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:34.921204  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:34.927878  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:34.934091  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
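The certificate steps above combine two standard openssl idioms: installing a CA into the hashed trust directory and checking that a cert is not about to expire. A condensed, illustrative version follows; the cert path is hypothetical, everything else mirrors the commands in the log.

    # Illustrative only; the path below is a hypothetical stand-in.
    CERT=/usr/share/ca-certificates/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # subject-name hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL looks up CAs as <hash>.0
    # Fail if the cert expires within the next 24h (86400s), as checked for the
    # etcd and apiserver client certs above.
    openssl x509 -noout -in "$CERT" -checkend 86400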
	I0830 22:19:34.940851  995603 kubeadm.go:404] StartCluster: {Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:34.940966  995603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:34.941036  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:34.978950  995603 cri.go:89] found id: ""
	I0830 22:19:34.979038  995603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:34.988290  995603 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:34.988324  995603 kubeadm.go:636] restartCluster start
	I0830 22:19:34.988403  995603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:34.998277  995603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:34.999385  995603 kubeconfig.go:92] found "old-k8s-version-250163" server: "https://192.168.39.10:8443"
	I0830 22:19:35.002017  995603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:35.013903  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.013962  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.028780  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.028800  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.028845  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.043243  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.543986  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.544109  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.555939  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.044164  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.044259  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.055496  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.544110  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.544243  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.555999  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.043535  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.043628  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.055019  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.543435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.543546  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.558778  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.044367  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.044482  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.058777  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.543327  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.543431  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.555133  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.043720  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.043874  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.059955  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.543461  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.543625  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.558707  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
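The repeated api_server.go:166 checks above amount to looking for a running kube-apiserver process and retrying while it is absent. A rough hand-run equivalent (the retry count is an arbitrary choice for the example, not from the log):

    # Illustrative sketch only: retry the same pgrep used in the log until the
    # apiserver process appears or the budget runs out.
    for i in $(seq 1 30); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is up"
        break
      fi
      sleep 1
    done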
	I0830 22:19:37.360241  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:39.363755  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:40.357373  995192 pod_ready.go:92] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:40.357396  995192 pod_ready.go:81] duration metric: took 5.023230161s waiting for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:40.357409  995192 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:39.597197  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:39.597650  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:39.597684  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:39.597603  996676 retry.go:31] will retry after 3.562835127s: waiting for machine to come up
	I0830 22:19:43.161572  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:43.162020  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:43.162054  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:43.161973  996676 retry.go:31] will retry after 5.409514109s: waiting for machine to come up
	I0830 22:19:40.044014  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.044104  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.059377  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:40.543910  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.544012  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.555295  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.043380  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.043493  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.055443  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.544046  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.544121  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.555832  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.043785  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.043876  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.054809  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.543376  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.543463  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.554254  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.043435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.043543  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.054734  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.544308  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.544418  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.555603  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.044211  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.044291  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.055403  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.544013  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.544117  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.555197  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.378396  995192 pod_ready.go:102] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:42.881428  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.881456  995192 pod_ready.go:81] duration metric: took 2.524040213s waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.881467  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892688  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.892718  995192 pod_ready.go:81] duration metric: took 11.243576ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892731  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898434  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.898463  995192 pod_ready.go:81] duration metric: took 5.721888ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898476  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904261  995192 pod_ready.go:92] pod "kube-proxy-vckmf" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.904287  995192 pod_ready.go:81] duration metric: took 5.803127ms waiting for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904299  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153736  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:43.153763  995192 pod_ready.go:81] duration metric: took 249.454932ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153777  995192 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:45.462667  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
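The pod_ready waits above correspond roughly to kubectl's condition wait. A hand-run equivalent for the CoreDNS pod would look like the sketch below; the context name is assumed to match the profile in the log, and the label selector is the usual kube-dns label rather than something taken from this run.

    # Illustrative only: wait for CoreDNS to report Ready, as pod_ready.go does above.
    kubectl --context default-k8s-diff-port-791007 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m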
	I0830 22:19:48.575718  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576172  994624 main.go:141] libmachine: (no-preload-698195) Found IP for machine: 192.168.72.28
	I0830 22:19:48.576206  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has current primary IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576217  994624 main.go:141] libmachine: (no-preload-698195) Reserving static IP address...
	I0830 22:19:48.576671  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.576719  994624 main.go:141] libmachine: (no-preload-698195) Reserved static IP address: 192.168.72.28
	I0830 22:19:48.576754  994624 main.go:141] libmachine: (no-preload-698195) DBG | skip adding static IP to network mk-no-preload-698195 - found existing host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"}
	I0830 22:19:48.576776  994624 main.go:141] libmachine: (no-preload-698195) DBG | Getting to WaitForSSH function...
	I0830 22:19:48.576792  994624 main.go:141] libmachine: (no-preload-698195) Waiting for SSH to be available...
	I0830 22:19:48.578953  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579261  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.579290  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579398  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH client type: external
	I0830 22:19:48.579417  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa (-rw-------)
	I0830 22:19:48.579451  994624 main.go:141] libmachine: (no-preload-698195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:48.579478  994624 main.go:141] libmachine: (no-preload-698195) DBG | About to run SSH command:
	I0830 22:19:48.579493  994624 main.go:141] libmachine: (no-preload-698195) DBG | exit 0
	I0830 22:19:48.679834  994624 main.go:141] libmachine: (no-preload-698195) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:48.680237  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetConfigRaw
	I0830 22:19:48.681064  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.683388  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.683844  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.683884  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.684153  994624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/config.json ...
	I0830 22:19:48.684435  994624 machine.go:88] provisioning docker machine ...
	I0830 22:19:48.684462  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:48.684708  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.684851  994624 buildroot.go:166] provisioning hostname "no-preload-698195"
	I0830 22:19:48.684883  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.685013  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.687508  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.687975  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.688018  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.688198  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.688413  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688599  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688830  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.689061  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.689695  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.689718  994624 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-698195 && echo "no-preload-698195" | sudo tee /etc/hostname
	I0830 22:19:45.014985  995603 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:45.015030  995603 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:45.015045  995603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:45.015102  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:45.049952  995603 cri.go:89] found id: ""
	I0830 22:19:45.050039  995603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:45.065202  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:45.074198  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:45.074330  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083407  995603 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083438  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:45.211527  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.256339  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.044735651s)
	I0830 22:19:46.256389  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.469714  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.542945  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.644533  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:46.644632  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:46.659432  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.182415  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.682613  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.182661  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.206336  995603 api_server.go:72] duration metric: took 1.561801361s to wait for apiserver process to appear ...
	I0830 22:19:48.206374  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:48.206399  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:50.136893  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 1m0.108561967s
	I0830 22:19:50.136941  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:50.136952  994705 fix.go:54] fixHost starting: 
	I0830 22:19:50.137347  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:50.137386  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:50.156678  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0830 22:19:50.157148  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:50.157739  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:19:50.157765  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:50.158103  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:50.158283  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.158445  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:19:50.160098  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Running err=<nil>
	W0830 22:19:50.160115  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:50.162162  994705 out.go:177] * Updating the running kvm2 "embed-certs-208903" VM ...
	I0830 22:19:50.163634  994705 machine.go:88] provisioning docker machine ...
	I0830 22:19:50.163663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.163906  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164077  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:19:50.164104  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164288  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.166831  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167198  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.167234  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167371  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.167561  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167731  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167902  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.168108  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.168592  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.168610  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:19:50.306738  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:19:50.306772  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.309523  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.309929  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.309974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.310182  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.310349  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310638  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310845  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.311027  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.311610  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.311644  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:50.433972  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:50.434005  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:50.434045  994705 buildroot.go:174] setting up certificates
	I0830 22:19:50.434057  994705 provision.go:83] configureAuth start
	I0830 22:19:50.434069  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.434388  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:19:50.437450  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.437883  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.437916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.438115  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.440654  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.441059  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441213  994705 provision.go:138] copyHostCerts
	I0830 22:19:50.441271  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:50.441283  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:50.441352  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:50.441453  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:50.441462  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:50.441481  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:50.441563  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:50.441575  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:50.441606  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:50.441684  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:19:50.721978  994705 provision.go:172] copyRemoteCerts
	I0830 22:19:50.722039  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:50.722072  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.724893  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725257  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.725289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725571  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.725799  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.726014  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.726181  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:19:50.817217  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:50.843335  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:50.869233  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:50.897508  994705 provision.go:86] duration metric: configureAuth took 463.432948ms
	I0830 22:19:50.897544  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:50.897804  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:50.897904  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.900633  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.901040  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901210  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.901404  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901547  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901680  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.901875  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.902287  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.902310  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:51.128816  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.128855  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:19:51.128866  994705 machine.go:91] provisioned docker machine in 965.212906ms
	I0830 22:19:51.128900  994705 fix.go:56] fixHost completed within 991.948899ms
	I0830 22:19:51.128906  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 991.990648ms
	W0830 22:19:51.129050  994705 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p embed-certs-208903" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.131823  994705 out.go:177] 
	W0830 22:19:51.133957  994705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0830 22:19:51.133985  994705 out.go:239] * 
	W0830 22:19:51.134788  994705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:19:51.136212  994705 out.go:177] 
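The restart of crio in the provisioning step above failed because a dependency unit of crio.service did not start; the error text itself points at journalctl. A minimal sketch, assuming it is run inside the guest (for example via `minikube ssh`), of collecting the diagnostics that message suggests:

package main

import (
	"fmt"
	"os/exec"
)

// Collect the dependency tree and recent journal entries for crio.service so
// the dependency named in "A dependency job for crio.service failed" can be
// identified. Both commands are standard systemd tooling on the guest.
func main() {
	for _, args := range [][]string{
		{"systemctl", "list-dependencies", "crio.service"},
		{"journalctl", "-u", "crio", "-n", "50", "--no-pager"},
	} {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("$ sudo %v\n%s\n", args, out)
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}
}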
	I0830 22:19:48.842387  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-698195
	
	I0830 22:19:48.842438  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.845727  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846100  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.846140  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846429  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.846658  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846856  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846991  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.847159  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.847578  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.847601  994624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-698195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-698195/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-698195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:48.994130  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:48.994176  994624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:48.994211  994624 buildroot.go:174] setting up certificates
	I0830 22:19:48.994244  994624 provision.go:83] configureAuth start
	I0830 22:19:48.994270  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.994612  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.997772  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998170  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.998208  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998416  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.001089  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001466  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.001498  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001639  994624 provision.go:138] copyHostCerts
	I0830 22:19:49.001702  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:49.001733  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:49.001808  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:49.001927  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:49.001937  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:49.001967  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:49.002042  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:49.002057  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:49.002085  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:49.002169  994624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.no-preload-698195 san=[192.168.72.28 192.168.72.28 localhost 127.0.0.1 minikube no-preload-698195]
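The server certificate above is generated with a SAN list covering the machine IP, localhost, 127.0.0.1, minikube and the profile name. A self-contained illustrative sketch of producing a certificate with such SANs using the Go standard library; the key size, validity period and self-signing here are assumptions for brevity, not minikube's exact settings (minikube signs with the CA key pair named in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate (2048 bits is an assumption).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-698195"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-698195"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.28"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here for brevity; minikube instead signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}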
	I0830 22:19:49.376465  994624 provision.go:172] copyRemoteCerts
	I0830 22:19:49.376534  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:49.376565  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.379932  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380313  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.380354  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380486  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.380738  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.380949  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.381109  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.474102  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:49.496563  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:49.518034  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:49.539392  994624 provision.go:86] duration metric: configureAuth took 545.126518ms
	I0830 22:19:49.539419  994624 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:49.539623  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:49.539719  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.542336  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542665  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.542738  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542839  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.543026  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543217  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543341  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.543459  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:49.543864  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:49.543882  994624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:49.869021  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:49.869051  994624 machine.go:91] provisioned docker machine in 1.184598655s
	I0830 22:19:49.869065  994624 start.go:300] post-start starting for "no-preload-698195" (driver="kvm2")
	I0830 22:19:49.869079  994624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:49.869110  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:49.869444  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:49.869481  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.871931  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872288  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.872333  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872502  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.872706  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.872888  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.873027  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.969286  994624 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:49.973513  994624 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:49.973532  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:49.973598  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:49.973671  994624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:49.973768  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:49.982880  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:50.006097  994624 start.go:303] post-start completed in 137.016363ms
	I0830 22:19:50.006124  994624 fix.go:56] fixHost completed within 24.947983055s
	I0830 22:19:50.006150  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.008513  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.008880  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.008908  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.009134  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.009371  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009560  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009755  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.009933  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.010372  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:50.010402  994624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:50.136709  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433990.121404659
	
	I0830 22:19:50.136738  994624 fix.go:206] guest clock: 1693433990.121404659
	I0830 22:19:50.136748  994624 fix.go:219] Guest: 2023-08-30 22:19:50.121404659 +0000 UTC Remote: 2023-08-30 22:19:50.006128322 +0000 UTC m=+361.306139641 (delta=115.276337ms)
	I0830 22:19:50.136792  994624 fix.go:190] guest clock delta is within tolerance: 115.276337ms
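The clock check above parses the guest's `date +%s.%N` output and compares it with the host clock before declaring the delta within tolerance. A rough sketch of that comparison using the two timestamps from the log; the 2s tolerance here is an assumption for illustration, not necessarily the value minikube uses:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock; the absolute delta in the log above was ~115ms.
func withinTolerance(host, guest time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	guest := time.Unix(1693433990, 121404659)                               // parsed from `date +%s.%N` on the guest
	host := time.Date(2023, time.August, 30, 22, 19, 50, 6128322, time.UTC) // host-side reading from the log
	fmt.Println("delta ok:", withinTolerance(host, guest, 2*time.Second))
}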
	I0830 22:19:50.136800  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 25.078698183s
	I0830 22:19:50.136834  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.137143  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:50.139834  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140214  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.140249  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140387  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.140890  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141088  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141191  994624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:50.141243  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.141312  994624 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:50.141335  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.144030  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144283  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144434  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144462  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144598  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144736  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144768  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144791  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.144912  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144987  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145152  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.145161  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.145318  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145433  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.257719  994624 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:50.263507  994624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:50.411574  994624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:50.418796  994624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:50.418872  994624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:50.435922  994624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:50.435943  994624 start.go:466] detecting cgroup driver to use...
	I0830 22:19:50.436022  994624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:50.450969  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:50.463538  994624 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:50.463596  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:50.475797  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:50.488143  994624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:50.586327  994624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:50.697497  994624 docker.go:212] disabling docker service ...
	I0830 22:19:50.697587  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:50.712369  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:50.726039  994624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:50.840596  994624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:50.967799  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:50.984629  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:51.006331  994624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:51.006404  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.017150  994624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:51.017241  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.028714  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.040075  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.054520  994624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
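The sed invocations above pin CRI-O's pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. An equivalent in-process sketch of the same substitutions; the sample input below is assumed for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror the sed edits from the log: pin the pause image, switch the
	// cgroup manager to cgroupfs, and move conmon into the pod cgroup.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}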
	I0830 22:19:51.067179  994624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:51.077610  994624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:51.077685  994624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:51.093337  994624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:51.104110  994624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:51.243534  994624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:51.455149  994624 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:51.455232  994624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:51.462110  994624 start.go:534] Will wait 60s for crictl version
	I0830 22:19:51.462185  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:51.468872  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:51.509838  994624 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:51.509924  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.562065  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.630813  994624 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:47.961668  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:50.461541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:51.632256  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:51.636020  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636430  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:51.636458  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636633  994624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:51.641003  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:51.655539  994624 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:51.655595  994624 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:51.691423  994624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:51.691455  994624 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:51.691508  994624 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.691795  994624 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.691800  994624 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.691932  994624 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.692015  994624 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.692204  994624 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.692383  994624 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693156  994624 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.693256  994624 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.693294  994624 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.693393  994624 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.693613  994624 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.693700  994624 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693767  994624 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.694704  994624 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.695502  994624 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.858227  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.862141  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.862588  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.864659  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.872937  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0830 22:19:51.885087  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.912710  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.970615  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.978831  994624 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0830 22:19:51.978883  994624 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.978930  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.004057  994624 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0830 22:19:52.004112  994624 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.004153  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031261  994624 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0830 22:19:52.031330  994624 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.031350  994624 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0830 22:19:52.031393  994624 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.031456  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031394  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168753  994624 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0830 22:19:52.168817  994624 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.168842  994624 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0830 22:19:52.168760  994624 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0830 22:19:52.168882  994624 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.168906  994624 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.168931  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168944  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168948  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:52.168877  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168988  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.169048  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.169156  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.218220  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0830 22:19:52.218353  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.235432  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.235565  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0830 22:19:52.235575  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.235692  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:52.246243  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.246437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0830 22:19:52.246550  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:52.260976  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0830 22:19:52.261024  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0830 22:19:52.261041  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:19:52.262450  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0830 22:19:52.316437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0830 22:19:52.316556  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:19:52.316709  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0830 22:19:52.316807  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:19:52.330026  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0830 22:19:52.330185  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0830 22:19:52.330318  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:19:53.207917  995603 api_server.go:269] stopped: https://192.168.39.10:8443/healthz: Get "https://192.168.39.10:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0830 22:19:53.207968  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.224442  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:54.224482  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:54.724967  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.732845  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:54.732880  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.224677  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.231265  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:55.231302  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.725325  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.731785  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:19:55.739996  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:19:55.740025  995603 api_server.go:131] duration metric: took 7.533643458s to wait for apiserver health ...
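The readiness loop above keeps probing /healthz until it returns 200, treating the 403 (anonymous user, RBAC not yet bootstrapped) and 500 (post-start hooks still failing) responses as "not ready yet". A compact sketch of such a poller; skipping TLS verification and the 500ms interval are assumptions for illustration, since the probe runs unauthenticated against a self-signed apiserver certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Non-200 answers are treated as "apiserver not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.10:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}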
	I0830 22:19:55.740037  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:55.740046  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:55.742083  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:19:52.462806  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:54.462856  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:56.962847  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:55.697808  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (3.436622341s)
	I0830 22:19:55.697847  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0830 22:19:55.697882  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1: (3.381312107s)
	I0830 22:19:55.697895  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0830 22:19:55.697927  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (3.436796784s)
	I0830 22:19:55.697959  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0830 22:19:55.697985  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.381155963s)
	I0830 22:19:55.698014  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0830 22:19:55.697989  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:55.698035  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.367694611s)
	I0830 22:19:55.698065  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0830 22:19:55.698072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:57.158231  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.460131868s)
	I0830 22:19:57.158266  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0830 22:19:57.158302  994624 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:57.158371  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:55.743724  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:55.755829  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:19:55.777604  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:55.792182  995603 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:55.792221  995603 system_pods.go:61] "coredns-5644d7b6d9-872nn" [acd3b375-2486-48c3-9032-6386a091128a] Running
	I0830 22:19:55.792232  995603 system_pods.go:61] "coredns-5644d7b6d9-lqn5v" [48a574c1-b546-4060-9aba-1e2bcdaf7541] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:55.792240  995603 system_pods.go:61] "etcd-old-k8s-version-250163" [8d4eb3c4-a10b-4803-a1cd-28199081480d] Running
	I0830 22:19:55.792247  995603 system_pods.go:61] "kube-apiserver-old-k8s-version-250163" [c2cb0944-0836-4419-9bcf-8b6ddcb8de4f] Running
	I0830 22:19:55.792253  995603 system_pods.go:61] "kube-controller-manager-old-k8s-version-250163" [953d90e1-21ec-47a8-916a-9641616443a3] Running
	I0830 22:19:55.792259  995603 system_pods.go:61] "kube-proxy-qg82w" [58c1bd37-de42-46db-8337-cad3969dbbe3] Running
	I0830 22:19:55.792265  995603 system_pods.go:61] "kube-scheduler-old-k8s-version-250163" [ead115ca-3faa-457a-a29d-6de753bf53ab] Running
	I0830 22:19:55.792271  995603 system_pods.go:61] "storage-provisioner" [e481c13c-17b5-4a76-8f56-01decf4d2dde] Running
	I0830 22:19:55.792278  995603 system_pods.go:74] duration metric: took 14.654143ms to wait for pod list to return data ...
	I0830 22:19:55.792291  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:55.800500  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:55.800529  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:55.800541  995603 node_conditions.go:105] duration metric: took 8.245305ms to run NodePressure ...
	I0830 22:19:55.800572  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:56.165598  995603 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:56.173177  995603 retry.go:31] will retry after 155.771258ms: kubelet not initialised
	I0830 22:19:56.335243  995603 retry.go:31] will retry after 435.88083ms: kubelet not initialised
	I0830 22:19:56.900108  995603 retry.go:31] will retry after 318.649581ms: kubelet not initialised
	I0830 22:19:57.226618  995603 retry.go:31] will retry after 906.607144ms: kubelet not initialised
	I0830 22:19:58.169644  995603 retry.go:31] will retry after 1.480507319s: kubelet not initialised
	I0830 22:19:59.662899  995603 retry.go:31] will retry after 1.43965579s: kubelet not initialised
	I0830 22:19:59.462944  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.463843  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.109412  995603 retry.go:31] will retry after 2.769965791s: kubelet not initialised
	I0830 22:20:03.884087  995603 retry.go:31] will retry after 5.524462984s: kubelet not initialised
	I0830 22:20:03.962393  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:06.463083  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:03.920494  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.762089682s)
	I0830 22:20:03.920528  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0830 22:20:03.920563  994624 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:03.920618  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:05.471647  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.551002795s)
	I0830 22:20:05.471696  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0830 22:20:05.471725  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:05.471808  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:07.437922  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.966087689s)
	I0830 22:20:07.437952  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0830 22:20:07.437986  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:07.438046  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:09.418426  995603 retry.go:31] will retry after 8.161662984s: kubelet not initialised
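The retry.go lines above show the wait for the restarted kubelet growing roughly geometrically between attempts. A small sketch of that pattern; the check function, initial wait and overall budget are assumptions for illustration:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps calling check until it succeeds or maxElapsed is
// spent, roughly doubling the pause between attempts as in the log above.
func retryWithBackoff(check func() error, initial, maxElapsed time.Duration) error {
	start := time.Now()
	wait := initial
	for {
		if err := check(); err == nil {
			return nil
		} else if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		fmt.Printf("will retry after %s: kubelet not initialised\n", wait)
		time.Sleep(wait)
		wait *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 200*time.Millisecond, time.Minute)
	fmt.Println("result:", err)
}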
	I0830 22:20:08.961616  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:10.962062  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:09.894897  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.456819743s)
	I0830 22:20:09.894932  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0830 22:20:09.895001  994624 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:09.895072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:10.848591  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0830 22:20:10.848635  994624 cache_images.go:123] Successfully loaded all cached images
	I0830 22:20:10.848641  994624 cache_images.go:92] LoadImages completed in 19.157171696s
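LoadImages above first asks the runtime via `sudo podman image inspect` whether each required image is present; anything missing is transferred from the local cache and imported with `sudo podman load -i`. A simplified sketch of that loop, assuming it runs directly on the guest rather than over SSH as minikube does, with a hypothetical two-image subset:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadIfMissing imports a cached image tarball only when the image is not
// already present in the podman/CRI-O image store.
func loadIfMissing(image, tarball string) error {
	if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
		return nil // already present, nothing to do
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("loading %s: %v: %s", image, err, out)
	}
	return nil
}

func main() {
	cacheDir := "/var/lib/minikube/images" // staging dir used in the log above
	for image, tar := range map[string]string{
		"registry.k8s.io/kube-apiserver:v1.28.1": "kube-apiserver_v1.28.1",
		"registry.k8s.io/etcd:3.5.9-0":           "etcd_3.5.9-0",
	} {
		if err := loadIfMissing(image, filepath.Join(cacheDir, tar)); err != nil {
			fmt.Println(err)
		}
	}
}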
	I0830 22:20:10.848726  994624 ssh_runner.go:195] Run: crio config
	I0830 22:20:10.912483  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:10.912514  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:10.912545  994624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:20:10.912574  994624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-698195 NodeName:no-preload-698195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:20:10.912729  994624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-698195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
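The block above is the full multi-document kubeadm config minikube renders (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) before it is copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch, assuming the gopkg.in/yaml.v3 module (not part of the log), that decodes such a multi-document file and lists the kinds as a quick sanity check that all four documents survived templating:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp step in the log.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the stream
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}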
	
	I0830 22:20:10.912793  994624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-698195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:20:10.912850  994624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:20:10.922383  994624 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:20:10.922470  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:20:10.931904  994624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0830 22:20:10.947603  994624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:20:10.963835  994624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0830 22:20:10.982645  994624 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0830 22:20:10.986493  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
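The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping. A rough, illustration-only equivalent in Go (the real step runs remotely via ssh_runner and sudo); the IP and hostname come from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.72.28"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop the old "<ip>\tcontrol-plane.minikube.internal" line, keep everything else.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	// Writing /etc/hosts needs root; minikube does this with "sudo cp" on the guest.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}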
	I0830 22:20:10.999967  994624 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195 for IP: 192.168.72.28
	I0830 22:20:11.000000  994624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:11.000190  994624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:20:11.000252  994624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:20:11.000348  994624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.key
	I0830 22:20:11.000455  994624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key.f951a290
	I0830 22:20:11.000518  994624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key
	I0830 22:20:11.000668  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:20:11.000712  994624 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:20:11.000728  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:20:11.000844  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:20:11.000881  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:20:11.000917  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:20:11.000978  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:20:11.001876  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:20:11.025256  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:20:11.048414  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:20:11.072696  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:20:11.097029  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:20:11.123653  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:20:11.152564  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:20:11.180885  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:20:11.204194  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:20:11.227365  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:20:11.249804  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:20:11.272563  994624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:20:11.289225  994624 ssh_runner.go:195] Run: openssl version
	I0830 22:20:11.295235  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:20:11.304745  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309554  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309615  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.314775  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:20:11.327372  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:20:11.338944  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344731  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344797  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.350242  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:20:11.359913  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:20:11.369367  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373467  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373511  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.378731  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
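Each "ln -fs" above publishes a CA under /etc/ssl/certs/<subject-hash>.0, where the hash is the output of "openssl x509 -hash -noout", so OpenSSL-based clients can look the certificate up by subject. A short sketch of that pairing, assuming openssl is on PATH and using the minikubeCA path and hash seen in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching b5213941.0 in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic "ln -fs": replace any existing link (needs root)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}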
	I0830 22:20:11.387877  994624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:20:11.392496  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:20:11.398057  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:20:11.403555  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:20:11.409343  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:20:11.414914  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:20:11.420465  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
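The "-checkend 86400" calls above ask whether each certificate is still valid 24 hours from now. The same check can be done without shelling out; a minimal sketch using the standard library, with one PEM path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}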
	I0830 22:20:11.425887  994624 kubeadm.go:404] StartCluster: {Name:no-preload-698195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:20:11.425988  994624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:20:11.426031  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:11.458215  994624 cri.go:89] found id: ""
	I0830 22:20:11.458307  994624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:20:11.468981  994624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:20:11.469010  994624 kubeadm.go:636] restartCluster start
	I0830 22:20:11.469068  994624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:20:11.478113  994624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.479707  994624 kubeconfig.go:92] found "no-preload-698195" server: "https://192.168.72.28:8443"
	I0830 22:20:11.483097  994624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:20:11.492068  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.492123  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.502752  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.502766  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.502803  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.514139  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.014881  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.014982  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.027078  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.514591  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.514686  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.529329  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.014971  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.015068  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.026874  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.514310  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.514395  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.526406  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.461372  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:15.961535  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:14.014646  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.014750  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.026467  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:14.515116  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.515212  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.527110  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.014622  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.014713  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.026083  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.515205  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.515304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.530248  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.014368  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.014472  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.025785  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.514315  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.514390  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.525823  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.014305  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.014410  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.025657  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.515255  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.515331  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.527967  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.014524  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.014603  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.025912  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.514454  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.514533  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.526034  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.586022  995603 retry.go:31] will retry after 7.910874514s: kubelet not initialised
	I0830 22:20:18.460574  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:20.460727  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:19.014477  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.014563  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.025688  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:19.514231  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.514318  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.526253  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.014551  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.014632  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.026223  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.515044  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.515142  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.526336  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.014933  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:21.015017  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:21.026315  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.492708  994624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
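The repeated pgrep failures above are a poll-until-deadline loop: the same process check is retried roughly every half second until the context expires, at which point the cluster is treated as needing reconfiguration. A minimal sketch of that pattern with only the standard library; the 500ms interval and 10s deadline are assumptions chosen to roughly match the timestamps in the log:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process until ctx expires.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// Same check the log runs over SSH; a non-zero exit means no match yet.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // e.g. context deadline exceeded
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}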
	I0830 22:20:21.492739  994624 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:20:21.492755  994624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:20:21.492837  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:21.528882  994624 cri.go:89] found id: ""
	I0830 22:20:21.528979  994624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:20:21.545258  994624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:20:21.554325  994624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:20:21.554387  994624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563086  994624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563121  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:21.688507  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.342362  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.552586  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.618512  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.699936  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:20:22.700029  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.715983  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.231090  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.730985  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.462833  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.462913  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:26.960795  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.230937  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:24.730685  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.230888  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.256876  994624 api_server.go:72] duration metric: took 2.556939469s to wait for apiserver process to appear ...
	I0830 22:20:25.256907  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:20:25.256929  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:25.502804  995603 retry.go:31] will retry after 19.65596925s: kubelet not initialised
	I0830 22:20:28.908329  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.908366  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:28.908382  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:28.973483  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.973534  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:29.474026  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.480796  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.480850  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:29.974406  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.981421  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.981453  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:30.474452  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:30.479311  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:20:30.490550  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:20:30.490581  994624 api_server.go:131] duration metric: took 5.233664737s to wait for apiserver health ...
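The healthz sequence above goes 403 (anonymous user rejected) to 500 (post-start hooks such as rbac/bootstrap-roles still failing) to 200 once bootstrapping completes. A sketch of polling the same endpoint; the URL is taken from the log, while skipping TLS verification is a simplification for the sketch only (minikube verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: no CA configured, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.28:8443/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}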
	I0830 22:20:30.490621  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:30.490634  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:30.492834  994624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:20:28.962577  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:31.461661  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:30.494469  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:20:30.508611  994624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:20:30.536470  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:20:30.547285  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:20:30.547321  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:20:30.547339  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:20:30.547352  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:20:30.547361  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:20:30.547369  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:20:30.547379  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:20:30.547391  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:20:30.547405  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:20:30.547416  994624 system_pods.go:74] duration metric: took 10.921869ms to wait for pod list to return data ...
	I0830 22:20:30.547428  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:20:30.550787  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:20:30.550816  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:20:30.550828  994624 node_conditions.go:105] duration metric: took 3.391486ms to run NodePressure ...
	I0830 22:20:30.550856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:30.786117  994624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793653  994624 kubeadm.go:787] kubelet initialised
	I0830 22:20:30.793680  994624 kubeadm.go:788] duration metric: took 7.533543ms waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793694  994624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:30.800474  994624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.808844  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808869  994624 pod_ready.go:81] duration metric: took 8.371156ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.808879  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808888  994624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.823461  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823487  994624 pod_ready.go:81] duration metric: took 14.590789ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.823497  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823504  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.834123  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834150  994624 pod_ready.go:81] duration metric: took 10.63758ms waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.834158  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834164  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.951589  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951620  994624 pod_ready.go:81] duration metric: took 117.448834ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.951628  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951635  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.343471  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343497  994624 pod_ready.go:81] duration metric: took 391.855831ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.343506  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343512  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.741491  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741527  994624 pod_ready.go:81] duration metric: took 398.007277ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.741539  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741555  994624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:32.141918  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141952  994624 pod_ready.go:81] duration metric: took 400.379332ms waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:32.141961  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141969  994624 pod_ready.go:38] duration metric: took 1.348263054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
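Each wait above inspects the pod's Ready condition, but is skipped (with an error) while the hosting node itself still reports Ready=False. A minimal client-go sketch of the per-pod check, assuming the k8s.io/client-go and k8s.io/api modules (not part of the log) and using the kubeconfig path and one pod name that do appear in the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17114-955377/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-no-preload-698195", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		// A pod counts as Ready only when the PodReady condition is True.
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}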
	I0830 22:20:32.141987  994624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:20:32.153800  994624 ops.go:34] apiserver oom_adj: -16
	I0830 22:20:32.153828  994624 kubeadm.go:640] restartCluster took 20.684809572s
	I0830 22:20:32.153848  994624 kubeadm.go:406] StartCluster complete in 20.727972693s
	I0830 22:20:32.153868  994624 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.153955  994624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:20:32.155765  994624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.156054  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:20:32.156162  994624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:20:32.156265  994624 addons.go:69] Setting storage-provisioner=true in profile "no-preload-698195"
	I0830 22:20:32.156285  994624 addons.go:231] Setting addon storage-provisioner=true in "no-preload-698195"
	I0830 22:20:32.156288  994624 addons.go:69] Setting default-storageclass=true in profile "no-preload-698195"
	I0830 22:20:32.156307  994624 addons.go:69] Setting metrics-server=true in profile "no-preload-698195"
	I0830 22:20:32.156344  994624 addons.go:231] Setting addon metrics-server=true in "no-preload-698195"
	I0830 22:20:32.156318  994624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-698195"
	I0830 22:20:32.156396  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	W0830 22:20:32.156293  994624 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:20:32.156512  994624 host.go:66] Checking if "no-preload-698195" exists ...
	W0830 22:20:32.156358  994624 addons.go:240] addon metrics-server should already be in state true
	I0830 22:20:32.156570  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.156821  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156847  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156849  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156867  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156948  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156961  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.165443  994624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-698195" context rescaled to 1 replicas
	I0830 22:20:32.165497  994624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:20:32.167715  994624 out.go:177] * Verifying Kubernetes components...
	I0830 22:20:32.169310  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:20:32.176341  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0830 22:20:32.176876  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0830 22:20:32.177070  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0830 22:20:32.177253  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177447  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177562  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177829  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.177856  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178014  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.178032  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178387  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179460  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179499  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.179517  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.179897  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179957  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.179996  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180272  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.180293  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180423  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.201009  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0830 22:20:32.201548  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.201926  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0830 22:20:32.202180  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202200  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.202304  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.202785  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.202842  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202865  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.203052  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.203202  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.203391  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.204424  994624 addons.go:231] Setting addon default-storageclass=true in "no-preload-698195"
	W0830 22:20:32.204450  994624 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:20:32.204491  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.204897  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.204931  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.205076  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.207516  994624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:20:32.206126  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.209336  994624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:20:32.210840  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:20:32.209276  994624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.210862  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:20:32.210877  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:20:32.210890  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.210897  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.214370  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214385  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214769  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214813  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214841  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.215131  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215199  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215346  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215387  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215521  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215580  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215651  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.215748  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.244173  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0830 22:20:32.244664  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.245311  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.245343  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.245760  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.246361  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.246416  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.263737  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0830 22:20:32.264177  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.264737  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.264761  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.265106  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.265342  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.266996  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.267406  994624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.267430  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:20:32.267454  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.270345  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.270799  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.270829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.271021  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.271215  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.271403  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.271526  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.362089  994624 node_ready.go:35] waiting up to 6m0s for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:32.362281  994624 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0830 22:20:32.371216  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.372220  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:20:32.372240  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:20:32.396916  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:20:32.396942  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:20:32.417651  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.430668  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:32.430699  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:20:32.476147  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:33.655453  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.284190116s)
	I0830 22:20:33.655495  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.237806074s)
	I0830 22:20:33.655515  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655532  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655519  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655602  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655854  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.655875  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.655885  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655894  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656045  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656082  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656095  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656115  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656160  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656169  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656180  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656195  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656394  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656432  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656437  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656455  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656465  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656729  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656741  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656754  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.802947  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326756295s)
	I0830 22:20:33.802994  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803016  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803349  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803371  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803381  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803391  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803393  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803632  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803682  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803700  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803720  994624 addons.go:467] Verifying addon metrics-server=true in "no-preload-698195"
	I0830 22:20:33.805489  994624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:20:33.462414  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:35.961487  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:33.806934  994624 addons.go:502] enable addons completed in 1.650789204s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:20:34.550814  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:36.551274  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:38.551355  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:37.963175  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:40.462510  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:39.550464  994624 node_ready.go:49] node "no-preload-698195" has status "Ready":"True"
	I0830 22:20:39.550505  994624 node_ready.go:38] duration metric: took 7.188369926s waiting for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:39.550516  994624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:39.556533  994624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562470  994624 pod_ready.go:92] pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.562498  994624 pod_ready.go:81] duration metric: took 5.934964ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562511  994624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568348  994624 pod_ready.go:92] pod "etcd-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.568371  994624 pod_ready.go:81] duration metric: took 5.853085ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568380  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:41.593857  994624 pod_ready.go:102] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:42.594544  994624 pod_ready.go:92] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.594572  994624 pod_ready.go:81] duration metric: took 3.026185311s waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.594586  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599820  994624 pod_ready.go:92] pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.599844  994624 pod_ready.go:81] duration metric: took 5.249213ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599856  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751073  994624 pod_ready.go:92] pod "kube-proxy-5fjvd" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.751096  994624 pod_ready.go:81] duration metric: took 151.233562ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751105  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150620  994624 pod_ready.go:92] pod "kube-scheduler-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:43.150646  994624 pod_ready.go:81] duration metric: took 399.535323ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150656  994624 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.464235  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:44.960831  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:46.961923  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.458489  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:47.958322  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.165236  995603 kubeadm.go:787] kubelet initialised
	I0830 22:20:45.165261  995603 kubeadm.go:788] duration metric: took 48.999634631s waiting for restarted kubelet to initialise ...
	I0830 22:20:45.165269  995603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:45.170939  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176235  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.176259  995603 pod_ready.go:81] duration metric: took 5.296469ms waiting for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176271  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180703  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.180718  995603 pod_ready.go:81] duration metric: took 4.44114ms waiting for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180725  995603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185225  995603 pod_ready.go:92] pod "etcd-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.185244  995603 pod_ready.go:81] duration metric: took 4.512736ms waiting for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185255  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190403  995603 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.190425  995603 pod_ready.go:81] duration metric: took 5.162774ms waiting for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190436  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564427  995603 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.564460  995603 pod_ready.go:81] duration metric: took 374.00421ms waiting for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564473  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964836  995603 pod_ready.go:92] pod "kube-proxy-qg82w" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.964857  995603 pod_ready.go:81] duration metric: took 400.377393ms waiting for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964866  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364023  995603 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:46.364046  995603 pod_ready.go:81] duration metric: took 399.172301ms waiting for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364060  995603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:48.672124  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:48.962198  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.461425  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:49.958485  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.959424  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.170855  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.172690  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.962708  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.461729  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:54.458026  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.458124  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.459811  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:55.669393  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:57.670454  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:59.670654  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.463098  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.962495  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.960274  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.457998  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:02.170872  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:04.670725  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.460674  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.461496  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.459727  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.959179  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:06.671066  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.169869  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.463765  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.961943  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.959351  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.458921  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:11.171435  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:13.171597  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.461881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.961416  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.459572  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:16.960064  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:15.670176  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:18.170049  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:17.460985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.462323  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.963325  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.459085  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.460169  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:20.671600  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.169931  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:24.464683  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.962740  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.958014  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.458502  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.458654  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:25.670985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.171321  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:29.461798  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:31.961714  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.464431  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.958557  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.669588  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.670695  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.671313  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.463531  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:36.960658  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.960256  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.460047  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.168958  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.170995  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:38.961145  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:40.961870  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.958213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.958373  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.670302  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.171198  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:43.461666  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:45.461738  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.459123  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.459226  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.459428  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.670708  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.671826  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:47.462306  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:49.462771  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.962010  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:50.958149  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:52.958493  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.169610  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:53.170386  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.461116  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:56.959735  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.959069  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.458784  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:55.172123  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.670323  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.671985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:58.961225  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:00.961822  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.959058  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:01.959700  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.170880  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:04.171473  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.961938  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:05.461758  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:03.960213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.458196  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:08.458500  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.671998  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:09.169979  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:07.962031  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.460716  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.960753  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.459638  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:11.669885  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.670821  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:12.461433  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:14.463156  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:16.961558  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.459765  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:17.959192  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.671350  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:18.170569  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.462375  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:21.961785  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.959308  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.457592  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:20.173424  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.671008  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:23.961985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.962149  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:24.458343  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:26.958471  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.169264  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.181579  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.670923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.964954  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:30.461530  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.458262  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:31.463334  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.171662  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.670239  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.961287  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.961787  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:33.957827  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:35.958367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.960259  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:36.671642  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.169834  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.462107  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.961576  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.961773  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:40.458367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:42.458710  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.671303  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.170994  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:43.964448  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.461777  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.958652  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.960005  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.171108  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.670866  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.462315  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:50.462456  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:49.459011  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.958137  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.170020  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.171135  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:52.462694  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:54.962055  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.958728  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.959555  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.671421  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:58.169881  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.461322  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:59.461865  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:01.963541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.458148  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.458834  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.170265  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.170719  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.670111  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:03.967458  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:05.972793  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.958722  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:07.458954  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:06.670434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.671269  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.461195  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:10.961859  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:09.458999  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.958146  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.170482  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.670156  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.462648  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.463851  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.958659  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.962293  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.458707  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.670647  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.170462  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:17.960881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:19.962032  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.959370  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.459653  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.670329  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.169817  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:22.461024  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:24.461537  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:26.960897  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.958696  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.459488  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.671024  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.170228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:29.461009  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:31.461891  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.958318  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.958723  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.170683  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.670966  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:33.462005  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.960841  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:34.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.458068  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.170093  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.671411  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.961501  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.460893  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:39.458824  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:41.461623  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.170169  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.670892  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.461840  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:43.154742  995192 pod_ready.go:81] duration metric: took 4m0.000931927s waiting for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	E0830 22:23:43.154776  995192 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:23:43.154798  995192 pod_ready.go:38] duration metric: took 4m7.830262728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:23:43.154853  995192 kubeadm.go:640] restartCluster took 4m30.336637887s
	W0830 22:23:43.154961  995192 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:23:43.155001  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0830 22:23:43.959940  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:46.458406  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:45.170898  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:47.670457  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:48.957451  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:51.457818  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:50.171371  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:52.171468  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:54.670175  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:53.958105  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:56.458176  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:57.169990  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:59.177173  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:58.957583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:00.958404  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:02.958866  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:01.670484  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:03.671368  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.457466  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:07.457893  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.671480  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:08.170128  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:09.458376  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:11.958335  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:10.171221  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:12.171398  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.171694  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.432406  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.277378744s)
	I0830 22:24:14.432498  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:14.446038  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:24:14.455354  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:24:14.464292  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:24:14.464332  995192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 22:24:14.680764  995192 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:24:13.965662  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.460984  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.171891  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.671072  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.958205  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.959096  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:23.459244  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.671733  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:22.671947  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.677772  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.927380  995192 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:24:24.927462  995192 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:24:24.927559  995192 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:24:24.927697  995192 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:24:24.927843  995192 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:24:24.927938  995192 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:24:24.929775  995192 out.go:204]   - Generating certificates and keys ...
	I0830 22:24:24.929895  995192 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:24:24.930004  995192 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:24:24.930118  995192 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:24:24.930202  995192 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:24:24.930321  995192 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:24:24.930408  995192 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:24:24.930485  995192 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:24:24.930559  995192 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:24:24.930658  995192 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:24:24.930756  995192 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:24:24.930821  995192 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:24:24.930922  995192 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:24:24.931009  995192 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:24:24.931077  995192 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:24:24.931170  995192 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:24:24.931245  995192 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:24:24.931354  995192 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:24:24.931430  995192 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:24:24.934341  995192 out.go:204]   - Booting up control plane ...
	I0830 22:24:24.934422  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:24:24.934524  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:24:24.934580  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:24:24.934689  995192 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:24:24.934770  995192 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:24:24.934809  995192 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:24:24.934936  995192 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:24:24.935014  995192 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003378 seconds
	I0830 22:24:24.935150  995192 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:24:24.935261  995192 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:24:24.935317  995192 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:24:24.935490  995192 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-791007 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:24:24.935540  995192 kubeadm.go:322] [bootstrap-token] Using token: 3t39h1.cgypp2756rpdn3ql
	I0830 22:24:24.937035  995192 out.go:204]   - Configuring RBAC rules ...
	I0830 22:24:24.937140  995192 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:24:24.937246  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:24:24.937428  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:24:24.937619  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:24:24.937762  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:24:24.937883  995192 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:24:24.938044  995192 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:24:24.938105  995192 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:24:24.938178  995192 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:24:24.938197  995192 kubeadm.go:322] 
	I0830 22:24:24.938277  995192 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:24:24.938290  995192 kubeadm.go:322] 
	I0830 22:24:24.938389  995192 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:24:24.938398  995192 kubeadm.go:322] 
	I0830 22:24:24.938429  995192 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:24:24.938506  995192 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:24:24.938577  995192 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:24:24.938586  995192 kubeadm.go:322] 
	I0830 22:24:24.938658  995192 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:24:24.938681  995192 kubeadm.go:322] 
	I0830 22:24:24.938745  995192 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:24:24.938754  995192 kubeadm.go:322] 
	I0830 22:24:24.938825  995192 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:24:24.938930  995192 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:24:24.939065  995192 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:24:24.939076  995192 kubeadm.go:322] 
	I0830 22:24:24.939160  995192 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:24:24.939266  995192 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:24:24.939280  995192 kubeadm.go:322] 
	I0830 22:24:24.939367  995192 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939452  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:24:24.939473  995192 kubeadm.go:322] 	--control-plane 
	I0830 22:24:24.939479  995192 kubeadm.go:322] 
	I0830 22:24:24.939597  995192 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:24:24.939606  995192 kubeadm.go:322] 
	I0830 22:24:24.939685  995192 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939848  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:24:24.939880  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:24:24.939916  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:24:24.942544  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:24:24.943961  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:24:24.990449  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
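The two ssh_runner steps above write the bridge CNI config: they create /etc/cni/net.d and copy a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist. The actual file contents are not captured in this log; purely as an illustrative sketch (plugin set, bridge name, and subnet are assumptions, not values taken from this run), a bridge-style conflist generally has this shape:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }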
	I0830 22:24:25.040966  995192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:24:25.041042  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.041041  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=default-k8s-diff-port-791007 minikube.k8s.io/updated_at=2023_08_30T22_24_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.441321  995192 ops.go:34] apiserver oom_adj: -16
	I0830 22:24:25.441492  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.557357  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.163303  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.663721  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.459794  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.957287  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.171894  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:29.671326  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.163474  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:27.664036  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.163187  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.663338  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.163719  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.663846  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.163288  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.663346  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.163165  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.663996  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.958583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.960227  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.671923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:34.171143  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:32.163631  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:32.663347  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.163634  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.663228  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.163600  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.663994  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.163597  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.663419  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.163764  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.663168  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.163646  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.663613  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.163643  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.264223  995192 kubeadm.go:1081] duration metric: took 13.22324453s to wait for elevateKubeSystemPrivileges.
	I0830 22:24:38.264262  995192 kubeadm.go:406] StartCluster complete in 5m25.484553135s
	I0830 22:24:38.264286  995192 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.264411  995192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:24:38.266553  995192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.266800  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:24:38.266990  995192 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:24:38.267105  995192 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267117  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:24:38.267126  995192 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267141  995192 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:24:38.267163  995192 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267184  995192 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267209  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267214  995192 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267234  995192 addons.go:240] addon metrics-server should already be in state true
	I0830 22:24:38.267207  995192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-791007"
	I0830 22:24:38.267330  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267664  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267735  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267806  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267797  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267851  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267866  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.285812  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0830 22:24:38.286287  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287008  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.287036  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.287384  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0830 22:24:38.287488  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0830 22:24:38.287526  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.287808  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287949  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.288154  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.288200  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.288370  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288516  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288582  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288562  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288947  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289135  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289343  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.289569  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.289610  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.299364  995192 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.299392  995192 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:24:38.299422  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.299824  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.299861  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.305325  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0830 22:24:38.305834  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.306214  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0830 22:24:38.306525  995192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-791007" context rescaled to 1 replicas
	I0830 22:24:38.306561  995192 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:24:38.308424  995192 out.go:177] * Verifying Kubernetes components...
	I0830 22:24:38.306646  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.306688  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.309840  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:38.309911  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310245  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310362  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.310381  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310433  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.310801  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.312319  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.314072  995192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:24:38.313018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.315723  995192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.315742  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:24:38.315759  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.317188  995192 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:24:34.457685  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.458268  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.459052  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.171434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.173228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.318441  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:24:38.318465  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:24:38.318488  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.319537  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.320365  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320640  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.321238  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.321431  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.321733  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.322284  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322691  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.322778  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322887  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.323058  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.323205  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.323265  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.328412  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0830 22:24:38.328853  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.329468  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.329479  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.329898  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.330379  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.330395  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.345318  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0830 22:24:38.345781  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.346309  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.346329  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.346665  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.346886  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.348620  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.348922  995192 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.348941  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:24:38.348961  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.351758  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352206  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.352233  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352357  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.352562  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.352787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.352918  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.474078  995192 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.474205  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:24:38.479269  995192 node_ready.go:49] node "default-k8s-diff-port-791007" has status "Ready":"True"
	I0830 22:24:38.479294  995192 node_ready.go:38] duration metric: took 5.181356ms waiting for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.479305  995192 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:38.486715  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:38.508419  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:24:38.508443  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:24:38.515075  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.532789  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.549460  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:24:38.549488  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:24:38.593580  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:38.593614  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:24:38.637965  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:40.093211  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.618968297s)
	I0830 22:24:40.093259  995192 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
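The Completed line above corresponds to the sed pipeline started at 22:24:38.474205: it reads the coredns ConfigMap, inserts a hosts block before the forward directive and a log directive before errors, then replaces the ConfigMap. Reconstructed from the sed expressions shown in that command (unrelated parts of the Corefile are elided), the edited server block ends up containing:

        log
        errors
        ...
        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...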
	I0830 22:24:40.526723  995192 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526748  995192 pod_ready.go:81] duration metric: took 2.040009497s waiting for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:40.526757  995192 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526765  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:40.552258  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.037149365s)
	I0830 22:24:40.552312  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019488451s)
	I0830 22:24:40.552317  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552381  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552351  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552696  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552714  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552724  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552734  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552891  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552902  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552918  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552927  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553114  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553132  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553170  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553202  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553210  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553219  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.553225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553478  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553493  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.776628  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.138598233s)
	I0830 22:24:40.776714  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.776731  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777199  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777224  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777246  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777256  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.777270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777546  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777626  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777647  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777667  995192 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-791007"
	I0830 22:24:40.779719  995192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:24:40.781134  995192 addons.go:502] enable addons completed in 2.51415908s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:24:40.459185  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:42.958731  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.150847  994624 pod_ready.go:81] duration metric: took 4m0.000170406s waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:43.150881  994624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:43.150893  994624 pod_ready.go:38] duration metric: took 4m3.600363648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.150919  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.150964  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:43.151043  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:43.199383  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:43.199412  994624 cri.go:89] found id: ""
	I0830 22:24:43.199420  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:43.199479  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.204289  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:43.204371  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:43.247303  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.247329  994624 cri.go:89] found id: ""
	I0830 22:24:43.247340  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:43.247396  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.252955  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:43.253024  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:43.286292  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.286318  994624 cri.go:89] found id: ""
	I0830 22:24:43.286327  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:43.286386  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.290585  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:43.290653  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:43.323616  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:43.323645  994624 cri.go:89] found id: ""
	I0830 22:24:43.323655  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:43.323729  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.328256  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:43.328326  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:43.363566  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:43.363595  994624 cri.go:89] found id: ""
	I0830 22:24:43.363605  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:43.363666  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.368006  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:43.368067  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:43.403728  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.403752  994624 cri.go:89] found id: ""
	I0830 22:24:43.403761  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:43.403833  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.407957  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:43.408020  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:43.438864  994624 cri.go:89] found id: ""
	I0830 22:24:43.438893  994624 logs.go:284] 0 containers: []
	W0830 22:24:43.438903  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:43.438911  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:43.438976  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:43.478905  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.478935  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:43.478942  994624 cri.go:89] found id: ""
	I0830 22:24:43.478951  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:43.479015  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.486919  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.496040  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:43.496070  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:43.669727  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:43.669764  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.712471  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:43.712508  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.746949  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:43.746988  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:42.573674  995192 pod_ready.go:92] pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.573706  995192 pod_ready.go:81] duration metric: took 2.046935361s waiting for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.573716  995192 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579433  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.579450  995192 pod_ready.go:81] duration metric: took 5.72841ms waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579458  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584499  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.584519  995192 pod_ready.go:81] duration metric: took 5.055504ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584527  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678045  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.678071  995192 pod_ready.go:81] duration metric: took 93.537153ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678084  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082548  995192 pod_ready.go:92] pod "kube-proxy-bbdvk" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.082576  995192 pod_ready.go:81] duration metric: took 404.485397ms waiting for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082585  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479813  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.479840  995192 pod_ready.go:81] duration metric: took 397.248046ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479851  995192 pod_ready.go:38] duration metric: took 5.000533366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.479872  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.479956  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:43.498558  995192 api_server.go:72] duration metric: took 5.191959207s to wait for apiserver process to appear ...
	I0830 22:24:43.498583  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:43.498603  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:24:43.504260  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:24:43.505566  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:43.505589  995192 api_server.go:131] duration metric: took 6.997863ms to wait for apiserver health ...
	I0830 22:24:43.505598  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:43.682798  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:43.682837  995192 system_pods.go:61] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:43.682846  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:43.682856  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:43.682863  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:43.682870  995192 system_pods.go:61] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:43.682876  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:43.682887  995192 system_pods.go:61] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:43.682897  995192 system_pods.go:61] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:43.682909  995192 system_pods.go:74] duration metric: took 177.304345ms to wait for pod list to return data ...
	I0830 22:24:43.682919  995192 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:43.878616  995192 default_sa.go:45] found service account: "default"
	I0830 22:24:43.878643  995192 default_sa.go:55] duration metric: took 195.70884ms for default service account to be created ...
	I0830 22:24:43.878654  995192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:44.083123  995192 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:44.083155  995192 system_pods.go:89] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:44.083161  995192 system_pods.go:89] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:44.083165  995192 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:44.083170  995192 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:44.083177  995192 system_pods.go:89] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:44.083181  995192 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:44.083187  995192 system_pods.go:89] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:44.083194  995192 system_pods.go:89] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:44.083203  995192 system_pods.go:126] duration metric: took 204.542978ms to wait for k8s-apps to be running ...
	I0830 22:24:44.083216  995192 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:44.083297  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:44.098110  995192 system_svc.go:56] duration metric: took 14.88196ms WaitForService to wait for kubelet.
	I0830 22:24:44.098143  995192 kubeadm.go:581] duration metric: took 5.7915497s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:44.098211  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:44.278751  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:44.278802  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:44.278814  995192 node_conditions.go:105] duration metric: took 180.597923ms to run NodePressure ...
	I0830 22:24:44.278825  995192 start.go:228] waiting for startup goroutines ...
	I0830 22:24:44.278831  995192 start.go:233] waiting for cluster config update ...
	I0830 22:24:44.278841  995192 start.go:242] writing updated cluster config ...
	I0830 22:24:44.279208  995192 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:44.332074  995192 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:44.334502  995192 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-791007" cluster and "default" namespace by default
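The block above is the readiness sequence minikube runs before declaring a profile started: poll the apiserver /healthz endpoint, list kube-system pods, confirm the default service account, and check node pressure. A minimal sketch of reproducing the first two checks by hand, assuming the default-k8s-diff-port-791007 profile from this log is still running and that its kubeconfig context carries the profile name (an assumption, not shown in the log):

    # Assumes the VM at 192.168.61.104 is still up; -k skips TLS verification of the self-signed cert.
    curl -k https://192.168.61.104:8444/healthz
    # Expect the literal body "ok", as logged above.
    kubectl --context default-k8s-diff-port-791007 -n kube-system get pods
    # Everything except metrics-server-57f55c9bc5-dllmg was Running in the pod list above.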
	I0830 22:24:40.672327  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.171136  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.780116  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:43.780147  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.824462  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:43.824494  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:43.875847  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:43.875881  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:43.937533  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:43.937582  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:43.950917  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:43.950948  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.989236  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:43.989265  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:44.025171  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:44.025218  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:44.644566  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:44.644609  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:44.692321  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:44.692356  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.229304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:47.252442  994624 api_server.go:72] duration metric: took 4m15.086891336s to wait for apiserver process to appear ...
	I0830 22:24:47.252476  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:47.252521  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:47.252593  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:47.286367  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.286397  994624 cri.go:89] found id: ""
	I0830 22:24:47.286410  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:47.286461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.290812  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:47.290883  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:47.324349  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.324376  994624 cri.go:89] found id: ""
	I0830 22:24:47.324386  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:47.324440  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.329002  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:47.329072  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:47.362954  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:47.362985  994624 cri.go:89] found id: ""
	I0830 22:24:47.362996  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:47.363062  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.367498  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:47.367587  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:47.398450  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.398478  994624 cri.go:89] found id: ""
	I0830 22:24:47.398489  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:47.398550  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.402646  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:47.402741  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:47.438663  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:47.438691  994624 cri.go:89] found id: ""
	I0830 22:24:47.438701  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:47.438769  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.443046  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:47.443114  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:47.472698  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.472725  994624 cri.go:89] found id: ""
	I0830 22:24:47.472733  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:47.472792  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.477075  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:47.477150  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:47.507099  994624 cri.go:89] found id: ""
	I0830 22:24:47.507138  994624 logs.go:284] 0 containers: []
	W0830 22:24:47.507148  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:47.507157  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:47.507232  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:47.540635  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:47.540661  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.540667  994624 cri.go:89] found id: ""
	I0830 22:24:47.540676  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:47.540734  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.545274  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.549659  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:47.549681  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:47.605419  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:47.605460  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.646819  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:47.646856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.684772  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:47.684801  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.731741  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:47.731791  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.762713  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:47.762745  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:48.266510  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:48.266557  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:48.315124  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:48.315164  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:48.332407  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:48.332447  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:48.463670  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:48.463710  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:48.498034  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:48.498067  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:48.528326  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:48.528372  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:48.563858  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:48.563893  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:45.670559  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:46.364206  995603 pod_ready.go:81] duration metric: took 4m0.000126235s waiting for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:46.364246  995603 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:46.364267  995603 pod_ready.go:38] duration metric: took 4m1.19899008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:46.364298  995603 kubeadm.go:640] restartCluster took 5m11.375966766s
	W0830 22:24:46.364364  995603 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:24:46.364394  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
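At this point the 995603 run has hit its 4m0s readiness deadline on metrics-server-74d5856cc6-7vrzq and falls back to a full kubeadm reset before re-initialising the cluster. A sketch of inspecting the pod that blocked the wait, assuming the old-k8s-version-250163 kubeconfig is in use and the pod still exists:

    # Pod name and namespace taken from the log above; the name will differ after the reset/re-init below.
    kubectl -n kube-system describe pod metrics-server-74d5856cc6-7vrzq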
	I0830 22:24:51.095064  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:24:51.106674  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:24:51.108320  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:51.108339  994624 api_server.go:131] duration metric: took 3.855856321s to wait for apiserver health ...
	I0830 22:24:51.108347  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:51.108375  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:51.108422  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:51.140030  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:51.140059  994624 cri.go:89] found id: ""
	I0830 22:24:51.140069  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:51.140133  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.144302  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:51.144375  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:51.181915  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:51.181944  994624 cri.go:89] found id: ""
	I0830 22:24:51.181953  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:51.182007  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.187104  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:51.187171  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:51.220763  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:51.220794  994624 cri.go:89] found id: ""
	I0830 22:24:51.220806  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:51.220890  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.225368  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:51.225443  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:51.263131  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:51.263155  994624 cri.go:89] found id: ""
	I0830 22:24:51.263164  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:51.263231  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.268531  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:51.268586  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:51.307119  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.307145  994624 cri.go:89] found id: ""
	I0830 22:24:51.307154  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:51.307224  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.311914  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:51.311988  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:51.341363  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:51.341391  994624 cri.go:89] found id: ""
	I0830 22:24:51.341402  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:51.341461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.345501  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:51.345570  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:51.378276  994624 cri.go:89] found id: ""
	I0830 22:24:51.378311  994624 logs.go:284] 0 containers: []
	W0830 22:24:51.378322  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:51.378329  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:51.378398  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:51.416207  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.416228  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:51.416232  994624 cri.go:89] found id: ""
	I0830 22:24:51.416245  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:51.416295  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.421034  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.424911  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:51.424938  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.458543  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:51.458576  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.489189  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:51.489223  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:52.074879  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:52.074924  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:52.091316  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:52.091357  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:52.131564  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:52.131602  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:52.168850  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:52.168879  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:52.200329  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:52.200358  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:52.230767  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:52.230799  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:52.276139  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:52.276177  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:52.330487  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:52.330523  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:52.469305  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:52.469336  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:52.536395  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:52.536432  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:55.089149  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:55.089184  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.089194  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.089198  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.089203  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.089207  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.089211  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.089217  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.089224  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.089230  994624 system_pods.go:74] duration metric: took 3.980877363s to wait for pod list to return data ...
	I0830 22:24:55.089237  994624 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:55.091833  994624 default_sa.go:45] found service account: "default"
	I0830 22:24:55.091862  994624 default_sa.go:55] duration metric: took 2.618667ms for default service account to be created ...
	I0830 22:24:55.091871  994624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:55.098108  994624 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:55.098145  994624 system_pods.go:89] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.098154  994624 system_pods.go:89] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.098163  994624 system_pods.go:89] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.098179  994624 system_pods.go:89] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.098190  994624 system_pods.go:89] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.098201  994624 system_pods.go:89] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.098212  994624 system_pods.go:89] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.098233  994624 system_pods.go:89] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.098241  994624 system_pods.go:126] duration metric: took 6.364144ms to wait for k8s-apps to be running ...
	I0830 22:24:55.098250  994624 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:55.098297  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:55.114382  994624 system_svc.go:56] duration metric: took 16.118629ms WaitForService to wait for kubelet.
	I0830 22:24:55.114413  994624 kubeadm.go:581] duration metric: took 4m22.94887118s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:55.114435  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:55.118227  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:55.118256  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:55.118272  994624 node_conditions.go:105] duration metric: took 3.832437ms to run NodePressure ...
	I0830 22:24:55.118287  994624 start.go:228] waiting for startup goroutines ...
	I0830 22:24:55.118295  994624 start.go:233] waiting for cluster config update ...
	I0830 22:24:55.118309  994624 start.go:242] writing updated cluster config ...
	I0830 22:24:55.118611  994624 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:55.169756  994624 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:55.172028  994624 out.go:177] * Done! kubectl is now configured to use "no-preload-698195" cluster and "default" namespace by default
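Throughout the 994624 run above, component logs are gathered with the same two-step pattern: resolve a component's container IDs with crictl ps, then dump the last 400 lines of each with crictl logs. The same pattern chained into one loop, as a sketch to run inside the VM (for example via minikube ssh); the component name here is only an example:

    # Mirrors the commands logged above: find kube-apiserver containers, then tail their logs.
    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
      sudo crictl logs --tail 400 "$id"
    done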
	I0830 22:25:09.359961  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (22.995525599s)
	I0830 22:25:09.360040  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:09.375757  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:25:09.385118  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:25:09.394601  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:25:09.394640  995603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0830 22:25:09.454824  995603 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0830 22:25:09.455022  995603 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:25:09.599893  995603 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:25:09.600055  995603 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:25:09.600213  995603 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:25:09.783920  995603 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:25:09.784082  995603 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:25:09.793193  995603 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0830 22:25:09.902777  995603 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:25:09.904820  995603 out.go:204]   - Generating certificates and keys ...
	I0830 22:25:09.904937  995603 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:25:09.905035  995603 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:25:09.905150  995603 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:25:09.905241  995603 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:25:09.905350  995603 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:25:09.905423  995603 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:25:09.905540  995603 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:25:09.905622  995603 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:25:09.905799  995603 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:25:09.905918  995603 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:25:09.905978  995603 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:25:09.906052  995603 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:25:10.141265  995603 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:25:10.238428  995603 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:25:10.387118  995603 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:25:10.620307  995603 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:25:10.625802  995603 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:25:10.627926  995603 out.go:204]   - Booting up control plane ...
	I0830 22:25:10.629866  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:25:10.635839  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:25:10.638800  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:25:10.641079  995603 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:25:10.666312  995603 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:25:20.671894  995603 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004868 seconds
	I0830 22:25:20.672078  995603 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:25:20.687003  995603 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:25:21.215417  995603 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:25:21.215657  995603 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-250163 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0830 22:25:21.726398  995603 kubeadm.go:322] [bootstrap-token] Using token: y3ik1i.subqwfsto1ck6o9y
	I0830 22:25:21.728095  995603 out.go:204]   - Configuring RBAC rules ...
	I0830 22:25:21.728243  995603 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:25:21.735828  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:25:21.741247  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:25:21.744588  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:25:21.747966  995603 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:25:21.835002  995603 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:25:22.157106  995603 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:25:22.157129  995603 kubeadm.go:322] 
	I0830 22:25:22.157207  995603 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:25:22.157221  995603 kubeadm.go:322] 
	I0830 22:25:22.157343  995603 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:25:22.157373  995603 kubeadm.go:322] 
	I0830 22:25:22.157410  995603 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:25:22.157493  995603 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:25:22.157572  995603 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:25:22.157581  995603 kubeadm.go:322] 
	I0830 22:25:22.157661  995603 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:25:22.157779  995603 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:25:22.157877  995603 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:25:22.157894  995603 kubeadm.go:322] 
	I0830 22:25:22.158002  995603 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0830 22:25:22.158104  995603 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:25:22.158119  995603 kubeadm.go:322] 
	I0830 22:25:22.158250  995603 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158415  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:25:22.158457  995603 kubeadm.go:322]     --control-plane 	  
	I0830 22:25:22.158467  995603 kubeadm.go:322] 
	I0830 22:25:22.158555  995603 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:25:22.158566  995603 kubeadm.go:322] 
	I0830 22:25:22.158674  995603 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158820  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:25:22.159148  995603 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:25:22.159192  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:25:22.159205  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:25:22.160970  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:25:22.162353  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:25:22.173835  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:25:22.192193  995603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:25:22.192332  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=old-k8s-version-250163 minikube.k8s.io/updated_at=2023_08_30T22_25_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.192335  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.440832  995603 ops.go:34] apiserver oom_adj: -16
	I0830 22:25:22.441067  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.560349  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.171762  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.671955  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.171789  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.671863  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.172176  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.672262  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.172348  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.672680  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.171856  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.671722  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.171712  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.671959  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.171914  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.672320  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.171688  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.671958  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.172481  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.672528  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.172583  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.672562  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.171839  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.672125  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.172515  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.672643  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.172469  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.672444  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.171897  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.672260  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.171900  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.332591  995603 kubeadm.go:1081] duration metric: took 15.140354535s to wait for elevateKubeSystemPrivileges.
	I0830 22:25:37.332635  995603 kubeadm.go:406] StartCluster complete in 6m2.391789918s
	I0830 22:25:37.332659  995603 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.332770  995603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:25:37.334722  995603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.334991  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:25:37.335087  995603 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:25:37.335217  995603 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335241  995603 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-250163"
	W0830 22:25:37.335253  995603 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:25:37.335313  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335317  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:25:37.335322  995603 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335342  995603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-250163"
	I0830 22:25:37.335345  995603 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335380  995603 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-250163"
	W0830 22:25:37.335391  995603 addons.go:240] addon metrics-server should already be in state true
	I0830 22:25:37.335440  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335753  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335847  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335810  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335939  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.355619  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I0830 22:25:37.355760  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43941
	I0830 22:25:37.355979  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0830 22:25:37.356166  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356203  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356595  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356729  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356748  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.356730  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356793  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357097  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.357114  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357170  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357177  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357383  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.357486  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357825  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.357857  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.358246  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.358292  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.373639  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0830 22:25:37.374107  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.374639  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.374657  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.375035  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.375359  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.377439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.379303  995603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:25:37.378176  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0830 22:25:37.380617  995603 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-250163"
	W0830 22:25:37.380661  995603 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:25:37.380706  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.380787  995603 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.380802  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:25:37.380826  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.381081  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.381123  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.381726  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.382284  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.382304  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.382656  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.382878  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.384791  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.387018  995603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:25:37.385098  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.385806  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.388841  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.388863  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.388865  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:25:37.388883  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:25:37.388907  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.389015  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.389121  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.389274  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.392059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392538  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.392557  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392720  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.392861  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.392989  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.393101  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.399504  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I0830 22:25:37.399592  995603 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-250163" context rescaled to 1 replicas
	I0830 22:25:37.399627  995603 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:25:37.401322  995603 out.go:177] * Verifying Kubernetes components...
	I0830 22:25:37.400205  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.402915  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:37.403460  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.403485  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.403872  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.404488  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.404537  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.420598  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0830 22:25:37.421352  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.422218  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.422240  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.422714  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.422979  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.424750  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.425396  995603 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.425415  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:25:37.425439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.428142  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428731  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.428762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428900  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.429077  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.429330  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.429469  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.705452  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.713345  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.736333  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:25:37.736356  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:25:37.825018  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:25:37.825051  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:25:37.858566  995603 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.858657  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:25:37.888050  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:37.888082  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:25:37.901662  995603 node_ready.go:49] node "old-k8s-version-250163" has status "Ready":"True"
	I0830 22:25:37.901689  995603 node_ready.go:38] duration metric: took 43.090996ms waiting for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.901701  995603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:37.928785  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:37.960479  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:39.232573  995603 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232603  995603 pod_ready.go:81] duration metric: took 1.303781463s waiting for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	E0830 22:25:39.232616  995603 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232630  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:39.305932  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.600438988s)
	I0830 22:25:39.306003  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306018  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306031  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592647384s)
	I0830 22:25:39.306084  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306106  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306088  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.447402831s)
	I0830 22:25:39.306222  995603 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0830 22:25:39.306459  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306481  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306485  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306512  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306518  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306534  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306517  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306608  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306628  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306638  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306862  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306903  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306911  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306946  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306972  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306981  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306993  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.307001  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.307338  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.307387  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.307407  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.425740  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.465201154s)
	I0830 22:25:39.425823  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.425844  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426221  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426260  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426272  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426289  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.426311  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426584  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426620  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426638  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426657  995603 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-250163"
	I0830 22:25:39.428628  995603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:25:39.430476  995603 addons.go:502] enable addons completed in 2.095405793s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:25:40.785067  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.785090  995603 pod_ready.go:81] duration metric: took 1.552452887s waiting for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.785100  995603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790132  995603 pod_ready.go:92] pod "kube-proxy-866k8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.790158  995603 pod_ready.go:81] duration metric: took 5.051684ms waiting for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790173  995603 pod_ready.go:38] duration metric: took 2.888452893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:40.790199  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:25:40.790247  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:25:40.805458  995603 api_server.go:72] duration metric: took 3.405792506s to wait for apiserver process to appear ...
	I0830 22:25:40.805488  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:25:40.805512  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:25:40.812389  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:25:40.813455  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:25:40.813483  995603 api_server.go:131] duration metric: took 7.983448ms to wait for apiserver health ...
	I0830 22:25:40.813520  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:25:40.818720  995603 system_pods.go:59] 4 kube-system pods found
	I0830 22:25:40.818741  995603 system_pods.go:61] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.818746  995603 system_pods.go:61] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.818754  995603 system_pods.go:61] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.818763  995603 system_pods.go:61] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.818768  995603 system_pods.go:74] duration metric: took 5.239623ms to wait for pod list to return data ...
	I0830 22:25:40.818776  995603 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:25:40.821982  995603 default_sa.go:45] found service account: "default"
	I0830 22:25:40.822001  995603 default_sa.go:55] duration metric: took 3.215755ms for default service account to be created ...
	I0830 22:25:40.822010  995603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:25:40.824823  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:40.824844  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.824850  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.824860  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.824871  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.824896  995603 retry.go:31] will retry after 244.703972ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.075793  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.075829  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.075838  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.075849  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.075860  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.075886  995603 retry.go:31] will retry after 325.650304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.407202  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.407234  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.407242  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.407252  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.407262  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.407313  995603 retry.go:31] will retry after 449.708915ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.862007  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.862038  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.862043  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.862061  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.862070  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.862086  995603 retry.go:31] will retry after 484.451835ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:42.351597  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:42.351637  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:42.351646  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:42.351656  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:42.351664  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:42.351680  995603 retry.go:31] will retry after 739.711019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.096340  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.096365  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.096371  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.096380  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.096387  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.096402  995603 retry.go:31] will retry after 871.763135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.974914  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.974947  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.974954  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.974964  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.974973  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.974994  995603 retry.go:31] will retry after 1.11275286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:45.093268  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:45.093293  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:45.093299  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:45.093306  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:45.093313  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:45.093329  995603 retry.go:31] will retry after 1.015840649s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:46.114920  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:46.114954  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:46.114961  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:46.114972  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:46.114982  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:46.115002  995603 retry.go:31] will retry after 1.822388925s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:47.942838  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:47.942870  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:47.942877  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:47.942887  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:47.942900  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:47.942920  995603 retry.go:31] will retry after 1.516432463s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:49.464430  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:49.464460  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:49.464465  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:49.464473  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:49.464480  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:49.464496  995603 retry.go:31] will retry after 2.558675876s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:52.028440  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:52.028469  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:52.028474  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:52.028481  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:52.028488  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:52.028503  995603 retry.go:31] will retry after 2.801664105s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:54.835174  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:54.835200  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:54.835205  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:54.835212  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:54.835219  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:54.835243  995603 retry.go:31] will retry after 3.386411543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:58.228062  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:58.228104  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:58.228113  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:58.228123  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:58.228136  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:58.228158  995603 retry.go:31] will retry after 5.58749509s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:03.822486  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:03.822511  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:03.822516  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:03.822523  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:03.822530  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:03.822548  995603 retry.go:31] will retry after 6.26222599s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:10.092537  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:10.092563  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:10.092569  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:10.092576  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:10.092582  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:10.092599  995603 retry.go:31] will retry after 6.680813015s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:16.780093  995603 system_pods.go:86] 5 kube-system pods found
	I0830 22:26:16.780120  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:16.780125  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Pending
	I0830 22:26:16.780130  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:16.780138  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:16.780145  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:16.780161  995603 retry.go:31] will retry after 9.963152707s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:26.749177  995603 system_pods.go:86] 7 kube-system pods found
	I0830 22:26:26.749205  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:26.749211  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:26.749215  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:26.749219  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:26.749223  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Pending
	I0830 22:26:26.749230  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:26.749237  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:26.749252  995603 retry.go:31] will retry after 8.744971537s: missing components: etcd, kube-scheduler
	I0830 22:26:35.500731  995603 system_pods.go:86] 8 kube-system pods found
	I0830 22:26:35.500759  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:35.500765  995603 system_pods.go:89] "etcd-old-k8s-version-250163" [260642d3-280e-4ae1-97bc-d15a904b3205] Running
	I0830 22:26:35.500769  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:35.500775  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:35.500779  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:35.500783  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Running
	I0830 22:26:35.500789  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:35.500796  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:35.500813  995603 system_pods.go:126] duration metric: took 54.67879848s to wait for k8s-apps to be running ...
	I0830 22:26:35.500827  995603 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:26:35.500876  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:26:35.519861  995603 system_svc.go:56] duration metric: took 19.021631ms WaitForService to wait for kubelet.
	I0830 22:26:35.519900  995603 kubeadm.go:581] duration metric: took 58.120243521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:26:35.519985  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:26:35.524455  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:26:35.524486  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:26:35.524537  995603 node_conditions.go:105] duration metric: took 4.543152ms to run NodePressure ...
	I0830 22:26:35.524550  995603 start.go:228] waiting for startup goroutines ...
	I0830 22:26:35.524562  995603 start.go:233] waiting for cluster config update ...
	I0830 22:26:35.524573  995603 start.go:242] writing updated cluster config ...
	I0830 22:26:35.524938  995603 ssh_runner.go:195] Run: rm -f paused
	I0830 22:26:35.578723  995603 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0830 22:26:35.580954  995603 out.go:177] 
	W0830 22:26:35.582332  995603 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0830 22:26:35.583700  995603 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0830 22:26:35.585290  995603 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-250163" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:18:37 UTC, ends at Wed 2023-08-30 22:30:35 UTC. --
	Aug 30 22:18:39 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:18:39 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 30 22:18:44 embed-certs-208903 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:18:44 embed-certs-208903 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 30 22:19:51 embed-certs-208903 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:19:51 embed-certs-208903 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	* 
	* ==> container status <==
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug30 22:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072921] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.305428] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387854] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153721] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.490379] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	
	* 
	* ==> kernel <==
	*  22:30:42 up 12 min,  0 users,  load average: 0.07, 0.02, 0.00
	Linux embed-certs-208903 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:18:37 UTC, ends at Wed 2023-08-30 22:30:42 UTC. --
	-- No entries --
	
	

-- /stdout --
** stderr ** 
	E0830 22:29:53.738549  998696 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:29:47Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:29:49Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:29:51Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:29:53Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:29:59.764622  998696 logs.go:281] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:29:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:29:55Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:29:57Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:29:59Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:30:05.796957  998696 logs.go:281] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:29:59Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:30:01Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:03Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:05Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:30:11.831648  998696 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:30:05Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:30:07Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:09Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:11Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:30:17.860237  998696 logs.go:281] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:30:11Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:30:13Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:15Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:17Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:30:23.890698  998696 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:30:17Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:30:19Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:21Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:23Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:30:29.925357  998696 logs.go:281] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:30:23Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:30:25Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:27Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:29Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:30:35.956579  998696 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:30:29Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:30:31Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:33Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:35Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:30:42.045138  998696 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:30:36Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:30:38Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:40Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:30:42Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-08-30T22:30:36Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead.\"\ntime=\"2023-08-30T22:30:38Z\" level=error msg=\"connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\ntime=\"2023-08-30T22:30:40Z\" level=error msg=\"connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\ntime=\"2023-08-30T22:30:42Z\" level=fatal msg=\"connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /st
derr **"
	E0830 22:30:42.143581  998696 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0830 22:30:42.125436     655 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:30:42.125703     655 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:30:42.127577     655 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:30:42.128972     655 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:30:42.130403     655 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nE0830 22:30:42.125436     655 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:30:42.125703     655 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:30:42.127577     655 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:30:42.128972     655 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:30:42.130403     655 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nThe connection to the s
erver localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903: exit status 2 (274.208803ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "embed-certs-208903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (596.21s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-08-30 22:33:44.958799636 +0000 UTC m=+5067.151549569
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-791007 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-791007 logs -n 25: (1.423819314s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-519738 -- sudo                         | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-519738                                 | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-184733                              | stopped-upgrade-184733       | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:09 UTC |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	| delete  | -p                                                     | disable-driver-mounts-883991 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | disable-driver-mounts-883991                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:12 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-698195             | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208903            | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-791007  | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC | 30 Aug 23 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-698195                  | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208903                 | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC | 30 Aug 23 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-250163        | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC | 30 Aug 23 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-791007       | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC | 30 Aug 23 22:24 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-250163             | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC | 30 Aug 23 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:16:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:16:59.758341  995603 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:16:59.758470  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758479  995603 out.go:309] Setting ErrFile to fd 2...
	I0830 22:16:59.758484  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758692  995603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:16:59.759241  995603 out.go:303] Setting JSON to false
	I0830 22:16:59.760232  995603 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14367,"bootTime":1693419453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:16:59.760291  995603 start.go:138] virtualization: kvm guest
	I0830 22:16:59.762744  995603 out.go:177] * [old-k8s-version-250163] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:16:59.764395  995603 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:16:59.765863  995603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:16:59.764404  995603 notify.go:220] Checking for updates...
	I0830 22:16:59.767579  995603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:16:59.769244  995603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:16:59.771001  995603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:16:59.772625  995603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:16:59.774574  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:16:59.774929  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.775032  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.790271  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0830 22:16:59.790677  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.791257  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.791283  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.791645  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.791879  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.793885  995603 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:16:59.795414  995603 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:16:59.795716  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.795752  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.810316  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0830 22:16:59.810694  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.811176  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.811201  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.811560  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.811808  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.845962  995603 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:16:59.847399  995603 start.go:298] selected driver: kvm2
	I0830 22:16:59.847410  995603 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.847546  995603 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:16:59.848301  995603 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.848376  995603 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:16:59.862654  995603 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:16:59.863040  995603 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:16:59.863080  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:16:59.863094  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:16:59.863109  995603 start_flags.go:319] config:
	{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.863341  995603 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.865500  995603 out.go:177] * Starting control plane node old-k8s-version-250163 in cluster old-k8s-version-250163
	I0830 22:17:00.916070  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:16:59.866763  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:16:59.866836  995603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0830 22:16:59.866852  995603 cache.go:57] Caching tarball of preloaded images
	I0830 22:16:59.866935  995603 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:16:59.866946  995603 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0830 22:16:59.867091  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:16:59.867314  995603 start.go:365] acquiring machines lock for old-k8s-version-250163: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:17:06.996025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:10.068036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:16.148043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:19.220024  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:25.300036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:28.372088  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:34.452043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:37.524037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:43.604037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:46.676107  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:52.756100  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:55.828195  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:01.908025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:04.980079  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:11.060035  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:14.132025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:20.212050  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:23.283995  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:26.288205  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 4m29.4670209s
	I0830 22:18:26.288261  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:26.288276  994705 fix.go:54] fixHost starting: 
	I0830 22:18:26.288621  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:26.288656  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:26.304048  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0830 22:18:26.304613  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:26.305138  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:18:26.305164  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:26.305518  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:26.305719  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:26.305843  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:18:26.307597  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Stopped err=<nil>
	I0830 22:18:26.307639  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	W0830 22:18:26.307827  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:26.309985  994705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208903" ...
	I0830 22:18:26.311551  994705 main.go:141] libmachine: (embed-certs-208903) Calling .Start
	I0830 22:18:26.311750  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring networks are active...
	I0830 22:18:26.312528  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network default is active
	I0830 22:18:26.312814  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network mk-embed-certs-208903 is active
	I0830 22:18:26.313153  994705 main.go:141] libmachine: (embed-certs-208903) Getting domain xml...
	I0830 22:18:26.313857  994705 main.go:141] libmachine: (embed-certs-208903) Creating domain...
	I0830 22:18:26.285881  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:26.285939  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:18:26.288013  994624 machine.go:91] provisioned docker machine in 4m37.410947228s
	I0830 22:18:26.288063  994624 fix.go:56] fixHost completed within 4m37.432260867s
	I0830 22:18:26.288085  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 4m37.432330775s
	W0830 22:18:26.288107  994624 start.go:672] error starting host: provision: host is not running
	W0830 22:18:26.288219  994624 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0830 22:18:26.288225  994624 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:27.529120  994705 main.go:141] libmachine: (embed-certs-208903) Waiting to get IP...
	I0830 22:18:27.530028  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.530390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.530515  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.530404  996319 retry.go:31] will retry after 311.351139ms: waiting for machine to come up
	I0830 22:18:27.843013  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.843398  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.843427  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.843337  996319 retry.go:31] will retry after 367.953943ms: waiting for machine to come up
	I0830 22:18:28.213214  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.213785  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.213820  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.213722  996319 retry.go:31] will retry after 424.275825ms: waiting for machine to come up
	I0830 22:18:28.639216  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.639670  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.639707  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.639609  996319 retry.go:31] will retry after 502.321201ms: waiting for machine to come up
	I0830 22:18:29.143240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.143823  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.143850  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.143790  996319 retry.go:31] will retry after 680.495047ms: waiting for machine to come up
	I0830 22:18:29.825462  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.825879  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.825904  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.825836  996319 retry.go:31] will retry after 756.63617ms: waiting for machine to come up
	I0830 22:18:30.583723  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:30.584179  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:30.584212  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:30.584118  996319 retry.go:31] will retry after 851.722792ms: waiting for machine to come up
	I0830 22:18:31.437603  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:31.438031  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:31.438063  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:31.437986  996319 retry.go:31] will retry after 1.214893807s: waiting for machine to come up
	I0830 22:18:31.289961  994624 start.go:365] acquiring machines lock for no-preload-698195: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:32.654351  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:32.654803  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:32.654829  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:32.654756  996319 retry.go:31] will retry after 1.574180335s: waiting for machine to come up
	I0830 22:18:34.231491  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:34.231911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:34.231944  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:34.231826  996319 retry.go:31] will retry after 1.99107048s: waiting for machine to come up
	I0830 22:18:36.225911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:36.226336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:36.226363  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:36.226283  996319 retry.go:31] will retry after 1.816508761s: waiting for machine to come up
	I0830 22:18:38.044672  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:38.045061  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:38.045094  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:38.045021  996319 retry.go:31] will retry after 2.343148299s: waiting for machine to come up
	I0830 22:18:40.389346  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:40.389753  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:40.389778  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:40.389700  996319 retry.go:31] will retry after 3.682098761s: waiting for machine to come up
	I0830 22:18:45.025750  995192 start.go:369] acquired machines lock for "default-k8s-diff-port-791007" in 3m32.939054887s
	I0830 22:18:45.025823  995192 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:45.025847  995192 fix.go:54] fixHost starting: 
	I0830 22:18:45.026291  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:45.026333  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:45.041161  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0830 22:18:45.041657  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:45.042176  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:18:45.042208  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:45.042544  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:45.042748  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:18:45.042910  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:18:45.044428  995192 fix.go:102] recreateIfNeeded on default-k8s-diff-port-791007: state=Stopped err=<nil>
	I0830 22:18:45.044454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	W0830 22:18:45.044615  995192 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:45.046538  995192 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-791007" ...
	I0830 22:18:44.074916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075386  994705 main.go:141] libmachine: (embed-certs-208903) Found IP for machine: 192.168.50.159
	I0830 22:18:44.075411  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has current primary IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075418  994705 main.go:141] libmachine: (embed-certs-208903) Reserving static IP address...
	I0830 22:18:44.075899  994705 main.go:141] libmachine: (embed-certs-208903) Reserved static IP address: 192.168.50.159
	I0830 22:18:44.075928  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.075939  994705 main.go:141] libmachine: (embed-certs-208903) Waiting for SSH to be available...
	I0830 22:18:44.075959  994705 main.go:141] libmachine: (embed-certs-208903) DBG | skip adding static IP to network mk-embed-certs-208903 - found existing host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"}
	I0830 22:18:44.075968  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Getting to WaitForSSH function...
	I0830 22:18:44.078068  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.078436  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH client type: external
	I0830 22:18:44.078533  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa (-rw-------)
	I0830 22:18:44.078569  994705 main.go:141] libmachine: (embed-certs-208903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:18:44.078590  994705 main.go:141] libmachine: (embed-certs-208903) DBG | About to run SSH command:
	I0830 22:18:44.078622  994705 main.go:141] libmachine: (embed-certs-208903) DBG | exit 0
	I0830 22:18:44.167514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | SSH cmd err, output: <nil>: 
	I0830 22:18:44.167898  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetConfigRaw
	I0830 22:18:44.168594  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.170974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.171370  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171696  994705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:18:44.171967  994705 machine.go:88] provisioning docker machine ...
	I0830 22:18:44.171989  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:44.172184  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172371  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:18:44.172397  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172563  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.174522  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174861  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.174894  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174988  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.175159  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175286  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175413  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.175627  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.176111  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.176132  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:18:44.309192  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:18:44.309225  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.311931  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312327  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.312362  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312512  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.312727  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.312919  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.313048  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.313215  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.313623  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.313638  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:18:44.440529  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:44.440594  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:18:44.440641  994705 buildroot.go:174] setting up certificates
	I0830 22:18:44.440653  994705 provision.go:83] configureAuth start
	I0830 22:18:44.440663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.440943  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.443289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443663  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.443705  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443805  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.445987  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446297  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.446328  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446462  994705 provision.go:138] copyHostCerts
	I0830 22:18:44.446524  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:18:44.446550  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:18:44.446638  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:18:44.446750  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:18:44.446763  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:18:44.446800  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:18:44.446907  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:18:44.446919  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:18:44.446955  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:18:44.447036  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:18:44.664313  994705 provision.go:172] copyRemoteCerts
	I0830 22:18:44.664387  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:18:44.664434  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.666819  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667160  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.667192  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667338  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.667565  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.667687  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.667839  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:18:44.756922  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:18:44.780430  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:18:44.803396  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:18:44.825975  994705 provision.go:86] duration metric: configureAuth took 385.307932ms
	I0830 22:18:44.826006  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:18:44.826230  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:18:44.826334  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.828862  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829199  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.829240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829383  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.829606  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829770  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829907  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.830104  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.830593  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.830615  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:18:45.025539  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025585  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:18:45.025596  994705 machine.go:91] provisioned docker machine in 853.613637ms
	I0830 22:18:45.025627  994705 fix.go:56] fixHost completed within 18.737351046s
	I0830 22:18:45.025637  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 18.737393499s
	W0830 22:18:45.025662  994705 start.go:672] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0830 22:18:45.025746  994705 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025760  994705 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:45.047821  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Start
	I0830 22:18:45.047982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring networks are active...
	I0830 22:18:45.048684  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network default is active
	I0830 22:18:45.049040  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network mk-default-k8s-diff-port-791007 is active
	I0830 22:18:45.049401  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Getting domain xml...
	I0830 22:18:45.050009  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Creating domain...
	I0830 22:18:46.288943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting to get IP...
	I0830 22:18:46.289982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290359  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.290388  996430 retry.go:31] will retry after 228.105709ms: waiting for machine to come up
	I0830 22:18:46.519862  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520369  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.520342  996430 retry.go:31] will retry after 343.008473ms: waiting for machine to come up
	I0830 22:18:46.865023  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865426  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.865385  996430 retry.go:31] will retry after 467.017605ms: waiting for machine to come up
	I0830 22:18:50.028247  994705 start.go:365] acquiring machines lock for embed-certs-208903: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:47.334027  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334655  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.334600  996430 retry.go:31] will retry after 601.952764ms: waiting for machine to come up
	I0830 22:18:47.937980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.938387  996430 retry.go:31] will retry after 556.18277ms: waiting for machine to come up
	I0830 22:18:48.495747  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496130  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:48.496101  996430 retry.go:31] will retry after 696.126701ms: waiting for machine to come up
	I0830 22:18:49.193405  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193789  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193822  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:49.193752  996430 retry.go:31] will retry after 1.123021492s: waiting for machine to come up
	I0830 22:18:50.318326  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318710  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:50.318637  996430 retry.go:31] will retry after 1.198520166s: waiting for machine to come up
	I0830 22:18:51.518959  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519302  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519332  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:51.519244  996430 retry.go:31] will retry after 1.851352392s: waiting for machine to come up
	I0830 22:18:53.373208  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373676  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373713  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:53.373594  996430 retry.go:31] will retry after 1.789163964s: waiting for machine to come up
	I0830 22:18:55.164132  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164664  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:55.164587  996430 retry.go:31] will retry after 2.037803279s: waiting for machine to come up
	I0830 22:18:57.204503  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204957  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204984  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:57.204919  996430 retry.go:31] will retry after 3.365492251s: waiting for machine to come up
	I0830 22:19:00.572195  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:19:00.572533  996430 retry.go:31] will retry after 3.57478782s: waiting for machine to come up
	I0830 22:19:05.536665  995603 start.go:369] acquired machines lock for "old-k8s-version-250163" in 2m5.669275373s
	I0830 22:19:05.536730  995603 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:05.536751  995603 fix.go:54] fixHost starting: 
	I0830 22:19:05.537197  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:05.537240  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:05.556581  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0830 22:19:05.557016  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:05.557559  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:19:05.557590  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:05.557937  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:05.558124  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:05.558290  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:19:05.559829  995603 fix.go:102] recreateIfNeeded on old-k8s-version-250163: state=Stopped err=<nil>
	I0830 22:19:05.559871  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	W0830 22:19:05.560056  995603 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:05.562726  995603 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-250163" ...
	I0830 22:19:04.151280  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.151787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Found IP for machine: 192.168.61.104
	I0830 22:19:04.151820  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserving static IP address...
	I0830 22:19:04.151839  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has current primary IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.152254  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.152286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserved static IP address: 192.168.61.104
	I0830 22:19:04.152306  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | skip adding static IP to network mk-default-k8s-diff-port-791007 - found existing host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"}
	I0830 22:19:04.152324  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for SSH to be available...
	I0830 22:19:04.152339  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Getting to WaitForSSH function...
	I0830 22:19:04.154335  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.154701  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154791  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH client type: external
	I0830 22:19:04.154833  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa (-rw-------)
	I0830 22:19:04.154852  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:04.154868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | About to run SSH command:
	I0830 22:19:04.154879  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | exit 0
	I0830 22:19:04.251692  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:04.252182  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetConfigRaw
	I0830 22:19:04.252842  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.255184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255536  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.255571  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255850  995192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/config.json ...
	I0830 22:19:04.256118  995192 machine.go:88] provisioning docker machine ...
	I0830 22:19:04.256143  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:04.256344  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256504  995192 buildroot.go:166] provisioning hostname "default-k8s-diff-port-791007"
	I0830 22:19:04.256525  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256653  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.259010  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259366  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.259389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259509  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.259667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.260115  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.260787  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.260810  995192 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-791007 && echo "default-k8s-diff-port-791007" | sudo tee /etc/hostname
	I0830 22:19:04.403123  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-791007
	
	I0830 22:19:04.403166  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.405835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406219  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.406270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406476  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.406704  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.406892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.407047  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.407233  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.407634  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.407658  995192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-791007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-791007/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-791007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:04.549964  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:04.550002  995192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:04.550039  995192 buildroot.go:174] setting up certificates
	I0830 22:19:04.550053  995192 provision.go:83] configureAuth start
	I0830 22:19:04.550071  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.550422  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.552844  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553116  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.553150  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553313  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.555514  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.555880  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.555917  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.556036  995192 provision.go:138] copyHostCerts
	I0830 22:19:04.556100  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:04.556133  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:04.556213  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:04.556343  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:04.556354  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:04.556392  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:04.556485  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:04.556496  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:04.556528  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:04.556607  995192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-791007 san=[192.168.61.104 192.168.61.104 localhost 127.0.0.1 minikube default-k8s-diff-port-791007]
	I0830 22:19:04.756354  995192 provision.go:172] copyRemoteCerts
	I0830 22:19:04.756413  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:04.756438  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.759134  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759511  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.759544  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759739  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.759977  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.760153  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.760297  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:04.858949  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:04.882455  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0830 22:19:04.905659  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:04.929876  995192 provision.go:86] duration metric: configureAuth took 379.794026ms
	I0830 22:19:04.929905  995192 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:04.930124  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:04.930228  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.932799  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933159  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.933192  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933316  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.933531  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933703  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.934015  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.934606  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.934633  995192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:05.266317  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:05.266349  995192 machine.go:91] provisioned docker machine in 1.010213866s
	I0830 22:19:05.266363  995192 start.go:300] post-start starting for "default-k8s-diff-port-791007" (driver="kvm2")
	I0830 22:19:05.266378  995192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:05.266402  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.266764  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:05.266802  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.269938  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270300  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.270345  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270472  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.270650  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.270800  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.270922  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.365334  995192 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:05.369583  995192 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:05.369608  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:05.369701  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:05.369790  995192 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:05.369879  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:05.377933  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:05.401027  995192 start.go:303] post-start completed in 134.648062ms
	I0830 22:19:05.401051  995192 fix.go:56] fixHost completed within 20.37520461s
	I0830 22:19:05.401079  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.404156  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.404629  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404765  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.404960  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405138  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405260  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.405463  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:05.405917  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:05.405930  995192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:05.536449  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433945.485000324
	
	I0830 22:19:05.536479  995192 fix.go:206] guest clock: 1693433945.485000324
	I0830 22:19:05.536490  995192 fix.go:219] Guest: 2023-08-30 22:19:05.485000324 +0000 UTC Remote: 2023-08-30 22:19:05.401056033 +0000 UTC m=+233.468479321 (delta=83.944291ms)
	I0830 22:19:05.536524  995192 fix.go:190] guest clock delta is within tolerance: 83.944291ms
	I0830 22:19:05.536535  995192 start.go:83] releasing machines lock for "default-k8s-diff-port-791007", held for 20.510742441s
	I0830 22:19:05.536569  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.536868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:05.539651  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540017  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.540057  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540196  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540737  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540911  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540975  995192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:05.541036  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.541133  995192 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:05.541172  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.543846  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.543892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544250  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544317  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544411  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544540  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544627  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544707  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544792  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544865  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544926  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.544972  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.677442  995192 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:05.683243  995192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:05.832776  995192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:05.838924  995192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:05.839000  995192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:05.857231  995192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:05.857251  995192 start.go:466] detecting cgroup driver to use...
	I0830 22:19:05.857349  995192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:05.875107  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:05.888540  995192 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:05.888603  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:05.901129  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:05.914011  995192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:06.015763  995192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:06.144950  995192 docker.go:212] disabling docker service ...
	I0830 22:19:06.145052  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:06.159373  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:06.172560  995192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:06.279514  995192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:06.413719  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:06.427047  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:06.443765  995192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:06.443853  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.452621  995192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:06.452690  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.461365  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.470052  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.478685  995192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:06.487763  995192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:06.495483  995192 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:06.495551  995192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:06.508009  995192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:06.516397  995192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:06.615209  995192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:06.792388  995192 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:06.792466  995192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:06.798170  995192 start.go:534] Will wait 60s for crictl version
	I0830 22:19:06.798231  995192 ssh_runner.go:195] Run: which crictl
	I0830 22:19:06.801828  995192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:06.842351  995192 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:06.842459  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.898609  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.962179  995192 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:06.963711  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:06.966803  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967189  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:06.967225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967412  995192 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:06.972033  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:05.564313  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Start
	I0830 22:19:05.564511  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring networks are active...
	I0830 22:19:05.565235  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network default is active
	I0830 22:19:05.565567  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network mk-old-k8s-version-250163 is active
	I0830 22:19:05.565954  995603 main.go:141] libmachine: (old-k8s-version-250163) Getting domain xml...
	I0830 22:19:05.566644  995603 main.go:141] libmachine: (old-k8s-version-250163) Creating domain...
	I0830 22:19:06.869485  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting to get IP...
	I0830 22:19:06.870595  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:06.871071  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:06.871133  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:06.871046  996542 retry.go:31] will retry after 294.811471ms: waiting for machine to come up
	I0830 22:19:07.167657  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.168126  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.168172  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.168099  996542 retry.go:31] will retry after 376.474639ms: waiting for machine to come up
	I0830 22:19:07.546876  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.547389  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.547419  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.547354  996542 retry.go:31] will retry after 329.757182ms: waiting for machine to come up
	I0830 22:19:07.878995  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.879572  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.879601  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.879529  996542 retry.go:31] will retry after 567.335814ms: waiting for machine to come up
	I0830 22:19:08.448373  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.448996  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.449028  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.448958  996542 retry.go:31] will retry after 510.216093ms: waiting for machine to come up
	I0830 22:19:08.960855  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.961412  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.961451  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.961326  996542 retry.go:31] will retry after 688.575912ms: waiting for machine to come up
	I0830 22:19:09.651966  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:09.652379  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:09.652411  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:09.652346  996542 retry.go:31] will retry after 1.130912238s: waiting for machine to come up
	I0830 22:19:06.984632  995192 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:06.984698  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:07.020200  995192 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:07.020282  995192 ssh_runner.go:195] Run: which lz4
	I0830 22:19:07.024254  995192 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:19:07.028470  995192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:07.028508  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:19:08.986852  995192 crio.go:444] Took 1.962647 seconds to copy over tarball
	I0830 22:19:08.986915  995192 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:10.784839  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:10.785424  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:10.785456  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:10.785355  996542 retry.go:31] will retry after 898.98114ms: waiting for machine to come up
	I0830 22:19:11.685890  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:11.686614  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:11.686646  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:11.686558  996542 retry.go:31] will retry after 1.621086004s: waiting for machine to come up
	I0830 22:19:13.310234  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:13.310696  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:13.310721  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:13.310630  996542 retry.go:31] will retry after 1.652651656s: waiting for machine to come up
	I0830 22:19:12.113071  995192 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.126115747s)
	I0830 22:19:12.113107  995192 crio.go:451] Took 3.126230 seconds to extract the tarball
	I0830 22:19:12.113120  995192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:12.156320  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:12.200547  995192 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:19:12.200573  995192 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:19:12.200652  995192 ssh_runner.go:195] Run: crio config
	I0830 22:19:12.273153  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:12.273180  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:12.273205  995192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:12.273231  995192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-791007 NodeName:default-k8s-diff-port-791007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:19:12.273413  995192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-791007"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:12.273497  995192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-791007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0830 22:19:12.273573  995192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:19:12.283536  995192 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:12.283609  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:12.292260  995192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0830 22:19:12.309407  995192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:12.325757  995192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0830 22:19:12.342664  995192 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:12.346459  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:12.358721  995192 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007 for IP: 192.168.61.104
	I0830 22:19:12.358797  995192 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:12.359010  995192 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:12.359066  995192 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:12.359147  995192 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.key
	I0830 22:19:12.359219  995192 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key.a202b4d9
	I0830 22:19:12.359255  995192 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key
	I0830 22:19:12.359363  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:12.359390  995192 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:12.359400  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:12.359424  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:12.359449  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:12.359471  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:12.359507  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:12.360328  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:12.385275  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 22:19:12.410697  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:12.434240  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:19:12.457206  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:12.484695  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:12.507670  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:12.531114  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:12.554501  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:12.579425  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:12.603211  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:12.628506  995192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:12.645536  995192 ssh_runner.go:195] Run: openssl version
	I0830 22:19:12.650882  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:12.660449  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665173  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665239  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.670785  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:12.681196  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:12.690775  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695204  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695262  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.700668  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:12.710205  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:12.719691  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724744  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724803  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.730472  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:12.740194  995192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:12.744773  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:12.750633  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:12.756228  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:12.762258  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:12.767895  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:12.773716  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0830 22:19:12.779716  995192 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-791007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:12.779849  995192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:12.779895  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:12.808983  995192 cri.go:89] found id: ""
	I0830 22:19:12.809055  995192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:12.818188  995192 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:12.818208  995192 kubeadm.go:636] restartCluster start
	I0830 22:19:12.818258  995192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:12.829333  995192 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.830440  995192 kubeconfig.go:92] found "default-k8s-diff-port-791007" server: "https://192.168.61.104:8444"
	I0830 22:19:12.833172  995192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:12.841419  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.841468  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.852072  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.852092  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.852135  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.862195  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.362894  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.362981  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.374932  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.862450  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.862558  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.874629  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.363249  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.363368  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.375071  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.862656  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.862767  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.874077  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.363282  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.363389  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.374762  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.862279  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.862375  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.873942  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.362457  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.362554  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.373922  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.862336  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.862415  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.873540  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.964585  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:14.965020  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:14.965042  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:14.964995  996542 retry.go:31] will retry after 1.89297354s: waiting for machine to come up
	I0830 22:19:16.859309  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:16.859825  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:16.859852  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:16.859777  996542 retry.go:31] will retry after 2.908196896s: waiting for machine to come up
	I0830 22:19:17.363243  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.363347  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.378177  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:17.862706  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.862785  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.877394  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.363052  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.363183  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.377397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.862918  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.862995  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.878397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.362972  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.363052  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.374591  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.863153  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.863233  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.878572  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.362613  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.362703  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.374006  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.862535  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.862634  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.874066  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.362612  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.362721  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.375262  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.863011  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.863113  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.874498  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.771969  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:19.772453  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:19.772482  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:19.772410  996542 retry.go:31] will retry after 3.967899631s: waiting for machine to come up
	I0830 22:19:23.743741  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744344  995603 main.go:141] libmachine: (old-k8s-version-250163) Found IP for machine: 192.168.39.10
	I0830 22:19:23.744371  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserving static IP address...
	I0830 22:19:23.744387  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has current primary IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744827  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.744860  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserved static IP address: 192.168.39.10
	I0830 22:19:23.744877  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | skip adding static IP to network mk-old-k8s-version-250163 - found existing host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"}
	I0830 22:19:23.744904  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Getting to WaitForSSH function...
	I0830 22:19:23.744920  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting for SSH to be available...
	I0830 22:19:23.747285  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747642  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.747676  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747864  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH client type: external
	I0830 22:19:23.747896  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa (-rw-------)
	I0830 22:19:23.747935  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:23.747954  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | About to run SSH command:
	I0830 22:19:23.747971  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | exit 0
	I0830 22:19:23.836434  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:23.837035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetConfigRaw
	I0830 22:19:23.837845  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:23.840648  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.841088  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841433  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:19:23.841663  995603 machine.go:88] provisioning docker machine ...
	I0830 22:19:23.841688  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:23.841895  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842049  995603 buildroot.go:166] provisioning hostname "old-k8s-version-250163"
	I0830 22:19:23.842069  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842291  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.844953  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845376  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.845408  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845678  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.845885  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846186  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.846361  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.846839  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.846861  995603 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-250163 && echo "old-k8s-version-250163" | sudo tee /etc/hostname
	I0830 22:19:23.981507  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-250163
	
	I0830 22:19:23.981556  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.984891  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.985249  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985369  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.985604  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.985811  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.986000  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.986199  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.986603  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.986620  995603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-250163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-250163/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-250163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:24.115894  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:24.115952  995603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:24.115985  995603 buildroot.go:174] setting up certificates
	I0830 22:19:24.115996  995603 provision.go:83] configureAuth start
	I0830 22:19:24.116014  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:24.116342  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:24.118887  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119266  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.119312  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119572  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.122166  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122551  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.122590  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122700  995603 provision.go:138] copyHostCerts
	I0830 22:19:24.122769  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:24.122793  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:24.122868  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:24.122989  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:24.123004  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:24.123035  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:24.123168  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:24.123184  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:24.123217  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:24.123302  995603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-250163 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube old-k8s-version-250163]
	I0830 22:19:24.303093  995603 provision.go:172] copyRemoteCerts
	I0830 22:19:24.303156  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:24.303182  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.305900  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306173  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.306199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306352  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.306545  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.306728  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.306873  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.393858  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:24.418791  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:19:24.441090  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:24.462926  995603 provision.go:86] duration metric: configureAuth took 346.913079ms
	I0830 22:19:24.462952  995603 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:24.463136  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:19:24.463224  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.465978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466321  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.466357  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466559  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.466785  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.466934  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.467035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.467173  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.467657  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.467676  995603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:25.058077  994624 start.go:369] acquired machines lock for "no-preload-698195" in 53.768050843s
	I0830 22:19:25.058128  994624 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:25.058141  994624 fix.go:54] fixHost starting: 
	I0830 22:19:25.058564  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:25.058603  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:25.076580  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0830 22:19:25.077082  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:25.077788  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:19:25.077824  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:25.078214  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:25.078418  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:25.078695  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:19:25.080411  994624 fix.go:102] recreateIfNeeded on no-preload-698195: state=Stopped err=<nil>
	I0830 22:19:25.080447  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	W0830 22:19:25.080636  994624 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:25.082566  994624 out.go:177] * Restarting existing kvm2 VM for "no-preload-698195" ...
	I0830 22:19:24.795523  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:24.795562  995603 machine.go:91] provisioned docker machine in 953.87669ms
	I0830 22:19:24.795575  995603 start.go:300] post-start starting for "old-k8s-version-250163" (driver="kvm2")
	I0830 22:19:24.795590  995603 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:24.795616  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:24.795984  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:24.796046  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.799136  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799534  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.799561  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799797  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.799996  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.800210  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.800396  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.890335  995603 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:24.894780  995603 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:24.894807  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:24.894890  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:24.894986  995603 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:24.895110  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:24.907259  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:24.932802  995603 start.go:303] post-start completed in 137.211475ms
	I0830 22:19:24.932829  995603 fix.go:56] fixHost completed within 19.396077949s
	I0830 22:19:24.932858  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.935762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936118  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.936160  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936310  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.936538  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936721  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936918  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.937109  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.937748  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.937767  995603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:25.057876  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433965.004650095
	
	I0830 22:19:25.057911  995603 fix.go:206] guest clock: 1693433965.004650095
	I0830 22:19:25.057924  995603 fix.go:219] Guest: 2023-08-30 22:19:25.004650095 +0000 UTC Remote: 2023-08-30 22:19:24.932833395 +0000 UTC m=+145.224486267 (delta=71.8167ms)
	I0830 22:19:25.057987  995603 fix.go:190] guest clock delta is within tolerance: 71.8167ms
	I0830 22:19:25.057998  995603 start.go:83] releasing machines lock for "old-k8s-version-250163", held for 19.521294969s
	I0830 22:19:25.058036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.058351  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:25.061325  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061749  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.061782  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061965  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062635  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062921  995603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:25.062977  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.063084  995603 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:25.063119  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.065978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066217  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066375  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066428  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066620  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066668  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066784  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066806  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.066953  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.067142  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067206  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067278  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.067389  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.181017  995603 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:25.188428  995603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:25.337310  995603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:25.346144  995603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:25.346231  995603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:25.368931  995603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:25.368966  995603 start.go:466] detecting cgroup driver to use...
	I0830 22:19:25.369048  995603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:25.383524  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:25.399296  995603 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:25.399365  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:25.416387  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:25.430426  995603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:25.552861  995603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:25.699278  995603 docker.go:212] disabling docker service ...
	I0830 22:19:25.699350  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:25.718108  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:25.736420  995603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:25.871165  995603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:25.993674  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:26.009215  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:26.027014  995603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0830 22:19:26.027122  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.038902  995603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:26.038985  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.051908  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.062635  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.073049  995603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:26.086514  995603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:26.098352  995603 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:26.098405  995603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:26.117326  995603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:26.129854  995603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:26.259656  995603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:26.476938  995603 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:26.477034  995603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:26.482773  995603 start.go:534] Will wait 60s for crictl version
	I0830 22:19:26.482841  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:26.486853  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:26.525498  995603 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:26.525595  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.585226  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.641386  995603 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
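	The sshutil/ssh_runner lines above show the pattern minikube uses for every remote step in this log: open an SSH session to the VM with the machine's id_rsa key, run a command as the docker user, and capture its output and exit status. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh follows; the runRemote helper and its error handling are assumptions for illustration, not minikube's actual sshutil package.

	// runRemote is a hypothetical helper mirroring the ssh_runner pattern above.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User: user,
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Test VMs use throwaway host keys, so verification is skipped here.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.39.10:22", "docker",
			"/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa",
			"systemctl --version")
		fmt.Println(out, err)
	}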
	I0830 22:19:22.362364  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:22.362448  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:22.373701  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:22.842449  995192 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:22.842531  995192 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:22.842551  995192 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:22.842623  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:22.871557  995192 cri.go:89] found id: ""
	I0830 22:19:22.871624  995192 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:22.886295  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:22.894486  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:22.894549  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902556  995192 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902578  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.017775  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.631493  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.831074  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.923222  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.994499  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:23.994583  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.007515  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.519195  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.019167  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.519068  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.019708  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.519664  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.547751  995192 api_server.go:72] duration metric: took 2.553248139s to wait for apiserver process to appear ...
	I0830 22:19:26.547794  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:26.547816  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
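	The healthz probes that follow are plain unauthenticated GETs against https://192.168.61.104:8444/healthz, repeated roughly every 500ms: 403 while the anonymous user is still forbidden, then 500 while post-start hooks such as rbac/bootstrap-roles are pending, retried until the endpoint returns 200 or the wait times out. A minimal sketch of that polling loop, assuming an anonymous client that skips TLS verification (the real logic lives in api_server.go and is more involved):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
	// or the deadline expires; non-200 responses are treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Unauthenticated probe, so TLS verification is skipped here; real code
			// would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.104:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}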
	I0830 22:19:25.084008  994624 main.go:141] libmachine: (no-preload-698195) Calling .Start
	I0830 22:19:25.084189  994624 main.go:141] libmachine: (no-preload-698195) Ensuring networks are active...
	I0830 22:19:25.085011  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network default is active
	I0830 22:19:25.085319  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network mk-no-preload-698195 is active
	I0830 22:19:25.085676  994624 main.go:141] libmachine: (no-preload-698195) Getting domain xml...
	I0830 22:19:25.086427  994624 main.go:141] libmachine: (no-preload-698195) Creating domain...
	I0830 22:19:26.443042  994624 main.go:141] libmachine: (no-preload-698195) Waiting to get IP...
	I0830 22:19:26.444179  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.444691  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.444784  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.444686  996676 retry.go:31] will retry after 208.17912ms: waiting for machine to come up
	I0830 22:19:26.654132  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.654621  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.654651  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.654581  996676 retry.go:31] will retry after 304.249592ms: waiting for machine to come up
	I0830 22:19:26.960205  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.960990  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.961014  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.960912  996676 retry.go:31] will retry after 342.108913ms: waiting for machine to come up
	I0830 22:19:27.304766  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.305661  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.305700  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.305602  996676 retry.go:31] will retry after 500.147687ms: waiting for machine to come up
	I0830 22:19:27.808375  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.808867  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.808884  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.808796  996676 retry.go:31] will retry after 562.543443ms: waiting for machine to come up
	I0830 22:19:28.373420  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:28.373912  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:28.373938  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:28.373863  996676 retry.go:31] will retry after 755.787662ms: waiting for machine to come up
	I0830 22:19:26.642985  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:26.646304  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646712  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:26.646773  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646957  995603 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:26.652439  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:26.667339  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:19:26.667418  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:26.703670  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:26.703750  995603 ssh_runner.go:195] Run: which lz4
	I0830 22:19:26.708087  995603 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 22:19:26.712329  995603 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:26.712362  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0830 22:19:28.602303  995603 crio.go:444] Took 1.894253 seconds to copy over tarball
	I0830 22:19:28.602408  995603 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:30.838763  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.838807  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:30.838824  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:30.908950  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.908987  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:31.409372  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.420411  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.420480  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:31.909095  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.916778  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.916813  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:29.130983  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:29.131530  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:29.131565  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:29.131459  996676 retry.go:31] will retry after 951.657872ms: waiting for machine to come up
	I0830 22:19:30.084853  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:30.085280  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:30.085306  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:30.085247  996676 retry.go:31] will retry after 1.469099841s: waiting for machine to come up
	I0830 22:19:31.556432  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:31.556893  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:31.556918  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:31.556809  996676 retry.go:31] will retry after 1.217757948s: waiting for machine to come up
	I0830 22:19:32.775796  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:32.776120  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:32.776152  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:32.776080  996676 retry.go:31] will retry after 2.032727742s: waiting for machine to come up
	I0830 22:19:31.859924  995603 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.257478408s)
	I0830 22:19:31.859957  995603 crio.go:451] Took 3.257622 seconds to extract the tarball
	I0830 22:19:31.859970  995603 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:31.917027  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:31.965752  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:31.965777  995603 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:31.965886  995603 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.965944  995603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.965980  995603 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.965879  995603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.966084  995603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.965878  995603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.965967  995603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:31.965901  995603 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968024  995603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.968045  995603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.968079  995603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.968186  995603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.968191  995603 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.968193  995603 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968248  995603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.968766  995603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.140478  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.140975  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.157997  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0830 22:19:32.159468  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.159950  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.160033  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.161682  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.255481  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:32.261235  995603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0830 22:19:32.261291  995603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.261340  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.282724  995603 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0830 22:19:32.282781  995603 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.282854  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378268  995603 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0830 22:19:32.378372  995603 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0830 22:19:32.378417  995603 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0830 22:19:32.378507  995603 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.378551  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378377  995603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0830 22:19:32.378578  995603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0830 22:19:32.378591  995603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.378600  995603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.378295  995603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378632  995603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.378439  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378657  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.468864  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.468935  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.469002  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.469032  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.469123  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.469183  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0830 22:19:32.469184  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.563508  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0830 22:19:32.563630  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0830 22:19:32.586962  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0830 22:19:32.587044  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0830 22:19:32.587059  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0830 22:19:32.587115  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0830 22:19:32.587208  995603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.587265  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0830 22:19:32.592221  995603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0830 22:19:32.592246  995603 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.592300  995603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0830 22:19:34.254194  995603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.661863162s)
	I0830 22:19:34.254235  995603 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0830 22:19:34.254281  995603 cache_images.go:92] LoadImages completed in 2.288490025s
	W0830 22:19:34.254418  995603 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
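	The cache_images flow above first asks the runtime whether each image already exists (sudo podman image inspect --format {{.Id}}), removes stale tags with crictl rmi, and loads missing ones from the local cache with sudo podman load -i; in this run only the pause:3.1 tarball was present, so the rest fail with "no such file or directory". A simplified local sketch of the check-then-load step (illustrative only; minikube issues these commands over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePresent reports whether the image is already in podman/CRI-O storage.
	func imagePresent(ref string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", ref).Output()
		return err == nil && strings.TrimSpace(string(out)) != ""
	}

	// loadImage loads a cached image tarball into the runtime's storage.
	func loadImage(tarball string) error {
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if !imagePresent("registry.k8s.io/pause:3.1") {
			if err := loadImage("/var/lib/minikube/images/pause_3.1"); err != nil {
				fmt.Println(err)
			}
		}
	}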
	I0830 22:19:34.254514  995603 ssh_runner.go:195] Run: crio config
	I0830 22:19:34.338842  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.338876  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.338903  995603 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:34.338929  995603 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-250163 NodeName:old-k8s-version-250163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 22:19:34.339134  995603 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-250163"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-250163
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.10:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:34.339240  995603 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-250163 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
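	Once the kubeadm.yaml and kubelet unit above are written to the node, the cluster is (re)built by running individual kubeadm init phases against that config, the same sequence the 995192 process runs at 22:19:22-24 (certs, kubeconfig, kubelet-start, control-plane, etcd). A hedged sketch of that sequence; the version path mirrors the log, while the kubeadmPhase wrapper is purely illustrative and not minikube's actual code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeadmPhase runs one "kubeadm init phase" against the rendered config,
	// using the per-version binaries directory seen in the log.
	func kubeadmPhase(phase string) error {
		cmd := exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" `+
				"kubeadm init phase "+phase+" --config /var/tmp/minikube/kubeadm.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubeadm init phase %s: %v\n%s", phase, err, out)
		}
		return nil
	}

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			if err := kubeadmPhase(phase); err != nil {
				fmt.Println(err)
				return
			}
		}
	}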
	I0830 22:19:34.339313  995603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0830 22:19:34.348990  995603 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:34.349076  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:34.358084  995603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0830 22:19:34.376989  995603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:34.396552  995603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0830 22:19:34.416666  995603 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:34.421910  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:34.436393  995603 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163 for IP: 192.168.39.10
	I0830 22:19:34.436490  995603 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:34.436717  995603 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:34.436774  995603 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:34.436867  995603 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.key
	I0830 22:19:34.436944  995603 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key.713efbbe
	I0830 22:19:34.437006  995603 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key
	I0830 22:19:34.437140  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:34.437187  995603 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:34.437203  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:34.437249  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:34.437284  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:34.437320  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:34.437388  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:34.438079  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:34.470943  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:19:34.503477  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:34.533783  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:19:34.562423  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:34.594418  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:34.625417  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:34.657444  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:34.689407  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:34.719004  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:34.745856  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:32.410110  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.418241  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.418269  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:32.910053  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.915839  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.915870  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.410086  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.488115  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.488161  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.909647  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.915252  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.915284  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.409978  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.418957  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:34.418995  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.909561  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.925400  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:19:34.938760  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:19:34.938793  995192 api_server.go:131] duration metric: took 8.390990557s to wait for apiserver health ...
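The lines above show the apiserver's /healthz flipping from 500 (rbac/bootstrap-roles still pending) to 200 after repeated probes roughly every 500ms. A minimal sketch of that kind of polling loop, assuming a plain net/http client with TLS verification skipped for brevity (not minikube's actual implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given URL until it returns HTTP 200 or the timeout passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.104:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}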
	I0830 22:19:34.938804  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.938813  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.941052  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:19:34.942805  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:34.967544  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
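The CNI step above writes a small conflist for the bridge plugin to /etc/cni/net.d/1-k8s.conflist. A sketch of what such a file can look like and how it could be written from Go; the field values below are illustrative assumptions, not the exact 457-byte file minikube generates:

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Illustrative bridge CNI conflist: a plugin list with a single "bridge"
	// plugin using host-local IPAM. Values are assumptions for the sketch.
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
					"routes": []map[string]string{{"dst": "0.0.0.0/0"}},
				},
			},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// Writing to /etc/cni/net.d requires root on the node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0644); err != nil {
		log.Fatal(err)
	}
}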
	I0830 22:19:34.998450  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:35.012600  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:35.012681  995192 system_pods.go:61] "coredns-5dd5756b68-992p2" [83ad338b-0338-45c3-a5ed-f772d100046b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:35.012702  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [4ed4f652-47c4-4d79-b8a8-dd0cc778bce0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:19:35.012714  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [c01b9dfc-ad6f-4348-abf0-fde4a64bfa98] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:19:35.012732  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [94cbccaf-3d5a-480c-8ee0-b8af5030909d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:19:35.012748  995192 system_pods.go:61] "kube-proxy-vckmf" [03f05466-f99b-4803-9164-233bfb9e7bb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:19:35.012760  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [2c5e190d-c93b-400a-8538-e31cc2016cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:19:35.012774  995192 system_pods.go:61] "metrics-server-57f55c9bc5-p8pp2" [4eaff1be-4258-427b-a110-47dabbffecee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:19:35.012788  995192 system_pods.go:61] "storage-provisioner" [8db3da8b-8256-405d-8d9c-79fdb6da8ab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:19:35.012800  995192 system_pods.go:74] duration metric: took 14.324835ms to wait for pod list to return data ...
	I0830 22:19:35.012814  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:35.024186  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:35.024216  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:35.024229  995192 node_conditions.go:105] duration metric: took 11.409776ms to run NodePressure ...
	I0830 22:19:35.024284  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:35.318824  995192 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324484  995192 kubeadm.go:787] kubelet initialised
	I0830 22:19:35.324512  995192 kubeadm.go:788] duration metric: took 5.656923ms waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324525  995192 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:19:35.334137  995192 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
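pod_ready.go then polls each system-critical pod until its Ready condition is True. A rough client-go sketch of that check, using the kubeconfig path and pod name seen in the log (requires the k8s.io/client-go module; a hedged illustration, not minikube's own helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-5dd5756b68-992p2", metav1.GetOptions{})
		if err == nil {
			// A pod counts as Ready when its PodReady condition is True.
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}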
	I0830 22:19:34.810276  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:34.810797  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:34.810836  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:34.810732  996676 retry.go:31] will retry after 2.550508742s: waiting for machine to come up
	I0830 22:19:37.364002  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:37.364550  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:37.364582  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:37.364489  996676 retry.go:31] will retry after 2.230782644s: waiting for machine to come up
	I0830 22:19:34.771235  995603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:34.787672  995603 ssh_runner.go:195] Run: openssl version
	I0830 22:19:34.793400  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:34.803208  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808108  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808166  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.814296  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:34.824791  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:34.838527  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844726  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844789  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.852442  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:34.862510  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:34.875456  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880581  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880702  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.886591  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:34.897133  995603 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:34.902292  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:34.908905  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:34.915276  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:34.921204  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:34.927878  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:34.934091  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
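The series of `openssl x509 -noout ... -checkend 86400` runs above verifies that each control-plane certificate is still valid for at least 24 hours. The same check can be expressed with Go's crypto/x509; the path below is taken from the log and the snippet is only a sketch of the equivalent logic:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for more than 24h")
}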
	I0830 22:19:34.940851  995603 kubeadm.go:404] StartCluster: {Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:34.940966  995603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:34.941036  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:34.978950  995603 cri.go:89] found id: ""
	I0830 22:19:34.979038  995603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:34.988290  995603 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:34.988324  995603 kubeadm.go:636] restartCluster start
	I0830 22:19:34.988403  995603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:34.998277  995603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:34.999385  995603 kubeconfig.go:92] found "old-k8s-version-250163" server: "https://192.168.39.10:8443"
	I0830 22:19:35.002017  995603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:35.013903  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.013962  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.028780  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.028800  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.028845  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.043243  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.543986  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.544109  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.555939  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.044164  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.044259  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.055496  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.544110  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.544243  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.555999  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.043535  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.043628  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.055019  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.543435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.543546  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.558778  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.044367  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.044482  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.058777  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.543327  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.543431  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.555133  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.043720  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.043874  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.059955  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.543461  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.543625  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.558707  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.360241  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:39.363755  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:40.357373  995192 pod_ready.go:92] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:40.357396  995192 pod_ready.go:81] duration metric: took 5.023230161s waiting for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:40.357409  995192 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:39.597197  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:39.597650  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:39.597684  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:39.597603  996676 retry.go:31] will retry after 3.562835127s: waiting for machine to come up
	I0830 22:19:43.161572  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:43.162020  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:43.162054  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:43.161973  996676 retry.go:31] will retry after 5.409514109s: waiting for machine to come up
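The retry.go lines above show the kvm2 driver waiting for the VM with growing, jittered delays between attempts. A hypothetical retry helper in that spirit (not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to maxAttempts times, sleeping a growing, randomized
// interval between failures, mirroring the "will retry after ..." log lines.
func retry(maxAttempts int, fn func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := time.Duration(attempt)*time.Second + time.Duration(rand.Intn(1000))*time.Millisecond
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	_ = retry(5, func() error { return errors.New("waiting for machine to come up") })
}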
	I0830 22:19:40.044014  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.044104  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.059377  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:40.543910  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.544012  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.555295  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.043380  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.043493  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.055443  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.544046  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.544121  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.555832  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.043785  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.043876  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.054809  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.543376  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.543463  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.554254  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.043435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.043543  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.054734  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.544308  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.544418  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.555603  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.044211  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.044291  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.055403  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.544013  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.544117  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.555197  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
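Each of the `sudo pgrep -xnf kube-apiserver.*minikube.*` attempts above exits with status 1 until an apiserver process exists; pgrep -xnf matches the full command line of the newest matching process and prints its PID, which is what produces the "stopped: unable to get apiserver pid" lines. A small os/exec sketch of the same probe (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID returns the PID of the newest process whose full command line
// matches the kube-apiserver pattern, or an error if no such process exists.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("apiserver not running: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pid, err := apiserverPID()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}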
	I0830 22:19:42.378396  995192 pod_ready.go:102] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:42.881428  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.881456  995192 pod_ready.go:81] duration metric: took 2.524040213s waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.881467  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892688  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.892718  995192 pod_ready.go:81] duration metric: took 11.243576ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892731  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898434  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.898463  995192 pod_ready.go:81] duration metric: took 5.721888ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898476  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904261  995192 pod_ready.go:92] pod "kube-proxy-vckmf" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.904287  995192 pod_ready.go:81] duration metric: took 5.803127ms waiting for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904299  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153736  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:43.153763  995192 pod_ready.go:81] duration metric: took 249.454932ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153777  995192 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:45.462667  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:48.575718  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576172  994624 main.go:141] libmachine: (no-preload-698195) Found IP for machine: 192.168.72.28
	I0830 22:19:48.576206  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has current primary IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576217  994624 main.go:141] libmachine: (no-preload-698195) Reserving static IP address...
	I0830 22:19:48.576671  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.576719  994624 main.go:141] libmachine: (no-preload-698195) Reserved static IP address: 192.168.72.28
	I0830 22:19:48.576754  994624 main.go:141] libmachine: (no-preload-698195) DBG | skip adding static IP to network mk-no-preload-698195 - found existing host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"}
	I0830 22:19:48.576776  994624 main.go:141] libmachine: (no-preload-698195) DBG | Getting to WaitForSSH function...
	I0830 22:19:48.576792  994624 main.go:141] libmachine: (no-preload-698195) Waiting for SSH to be available...
	I0830 22:19:48.578953  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579261  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.579290  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579398  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH client type: external
	I0830 22:19:48.579417  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa (-rw-------)
	I0830 22:19:48.579451  994624 main.go:141] libmachine: (no-preload-698195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:48.579478  994624 main.go:141] libmachine: (no-preload-698195) DBG | About to run SSH command:
	I0830 22:19:48.579493  994624 main.go:141] libmachine: (no-preload-698195) DBG | exit 0
	I0830 22:19:48.679834  994624 main.go:141] libmachine: (no-preload-698195) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:48.680237  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetConfigRaw
	I0830 22:19:48.681064  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.683388  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.683844  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.683884  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.684153  994624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/config.json ...
	I0830 22:19:48.684435  994624 machine.go:88] provisioning docker machine ...
	I0830 22:19:48.684462  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:48.684708  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.684851  994624 buildroot.go:166] provisioning hostname "no-preload-698195"
	I0830 22:19:48.684883  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.685013  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.687508  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.687975  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.688018  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.688198  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.688413  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688599  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688830  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.689061  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.689695  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.689718  994624 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-698195 && echo "no-preload-698195" | sudo tee /etc/hostname
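Provisioning commands like the hostname step above are executed over SSH with the machine's private key. A sketch using golang.org/x/crypto/ssh, with the address and key path taken from the log; host-key verification is skipped here purely for brevity, which a real tool should avoid:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.72.28:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only; do not use in production
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Run the same kind of provisioning command the log shows.
	out, err := session.CombinedOutput(`sudo hostname no-preload-698195 && echo "no-preload-698195" | sudo tee /etc/hostname`)
	fmt.Println(string(out), err)
}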
	I0830 22:19:45.014985  995603 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:45.015030  995603 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:45.015045  995603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:45.015102  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:45.049952  995603 cri.go:89] found id: ""
	I0830 22:19:45.050039  995603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:45.065202  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:45.074198  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:45.074330  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083407  995603 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083438  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:45.211527  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.256339  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.044735651s)
	I0830 22:19:46.256389  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.469714  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.542945  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.644533  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:46.644632  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:46.659432  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.182415  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.682613  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.182661  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.206336  995603 api_server.go:72] duration metric: took 1.561801361s to wait for apiserver process to appear ...
	I0830 22:19:48.206374  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:48.206399  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:50.136893  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 1m0.108561967s
	I0830 22:19:50.136941  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:50.136952  994705 fix.go:54] fixHost starting: 
	I0830 22:19:50.137347  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:50.137386  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:50.156678  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0830 22:19:50.157148  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:50.157739  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:19:50.157765  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:50.158103  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:50.158283  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.158445  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:19:50.160098  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Running err=<nil>
	W0830 22:19:50.160115  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:50.162162  994705 out.go:177] * Updating the running kvm2 "embed-certs-208903" VM ...
	I0830 22:19:50.163634  994705 machine.go:88] provisioning docker machine ...
	I0830 22:19:50.163663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.163906  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164077  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:19:50.164104  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164288  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.166831  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167198  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.167234  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167371  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.167561  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167731  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167902  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.168108  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.168592  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.168610  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:19:50.306738  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:19:50.306772  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.309523  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.309929  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.309974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.310182  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.310349  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310638  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310845  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.311027  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.311610  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.311644  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:50.433972  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:50.434005  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:50.434045  994705 buildroot.go:174] setting up certificates
	I0830 22:19:50.434057  994705 provision.go:83] configureAuth start
	I0830 22:19:50.434069  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.434388  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:19:50.437450  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.437883  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.437916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.438115  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.440654  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.441059  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441213  994705 provision.go:138] copyHostCerts
	I0830 22:19:50.441271  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:50.441283  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:50.441352  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:50.441453  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:50.441462  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:50.441481  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:50.441563  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:50.441575  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:50.441606  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:50.441684  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
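provision.go then generates a server certificate whose SANs cover the VM IP, localhost and the machine name. A compact crypto/x509 sketch of creating such a certificate; it is self-signed here for simplicity, whereas minikube signs it with its CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs mirror the san=[...] list in the log above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-208903"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-208903"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.159"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}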
	I0830 22:19:50.721978  994705 provision.go:172] copyRemoteCerts
	I0830 22:19:50.722039  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:50.722072  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.724893  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725257  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.725289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725571  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.725799  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.726014  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.726181  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:19:50.817217  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:50.843335  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:50.869233  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:50.897508  994705 provision.go:86] duration metric: configureAuth took 463.432948ms
	I0830 22:19:50.897544  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:50.897804  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:50.897904  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.900633  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.901040  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901210  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.901404  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901547  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901680  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.901875  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.902287  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.902310  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:51.128816  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.128855  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:19:51.128866  994705 machine.go:91] provisioned docker machine in 965.212906ms
	I0830 22:19:51.128900  994705 fix.go:56] fixHost completed within 991.948899ms
	I0830 22:19:51.128906  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 991.990648ms
	W0830 22:19:51.129050  994705 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p embed-certs-208903" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.131823  994705 out.go:177] 
	W0830 22:19:51.133957  994705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0830 22:19:51.133985  994705 out.go:239] * 
	W0830 22:19:51.134788  994705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:19:51.136212  994705 out.go:177] 
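	The GUEST_PROVISION failure above is the crio.service dependency error that recurs across the Stop/SecondStart failures listed at the top of this report. The log does not capture which dependency job blocked crio; the following is only a sketch of standard systemd triage that could be run inside the embed-certs-208903 guest (assuming the VM still exists and is reachable over SSH), not something executed in this run:

	    # sketch only; assumes the embed-certs-208903 VM is still up
	    minikube ssh -p embed-certs-208903
	    # inside the guest:
	    sudo systemctl status crio              # failed unit and the dependency that blocked it
	    sudo systemctl --failed                 # list failed units (the blocking dependency job)
	    sudo journalctl -xe -u crio --no-pager  # the log the error message points at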
	I0830 22:19:48.842387  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-698195
	
	I0830 22:19:48.842438  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.845727  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846100  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.846140  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846429  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.846658  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846856  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846991  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.847159  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.847578  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.847601  994624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-698195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-698195/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-698195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:48.994130  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:48.994176  994624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:48.994211  994624 buildroot.go:174] setting up certificates
	I0830 22:19:48.994244  994624 provision.go:83] configureAuth start
	I0830 22:19:48.994270  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.994612  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.997772  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998170  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.998208  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998416  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.001089  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001466  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.001498  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001639  994624 provision.go:138] copyHostCerts
	I0830 22:19:49.001702  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:49.001733  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:49.001808  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:49.001927  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:49.001937  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:49.001967  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:49.002042  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:49.002057  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:49.002085  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:49.002169  994624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.no-preload-698195 san=[192.168.72.28 192.168.72.28 localhost 127.0.0.1 minikube no-preload-698195]
	I0830 22:19:49.376465  994624 provision.go:172] copyRemoteCerts
	I0830 22:19:49.376534  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:49.376565  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.379932  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380313  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.380354  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380486  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.380738  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.380949  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.381109  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.474102  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:49.496563  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:49.518034  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:49.539392  994624 provision.go:86] duration metric: configureAuth took 545.126518ms
	I0830 22:19:49.539419  994624 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:49.539623  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:49.539719  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.542336  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542665  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.542738  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542839  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.543026  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543217  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543341  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.543459  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:49.543864  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:49.543882  994624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:49.869021  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:49.869051  994624 machine.go:91] provisioned docker machine in 1.184598655s
	I0830 22:19:49.869065  994624 start.go:300] post-start starting for "no-preload-698195" (driver="kvm2")
	I0830 22:19:49.869079  994624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:49.869110  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:49.869444  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:49.869481  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.871931  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872288  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.872333  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872502  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.872706  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.872888  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.873027  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.969286  994624 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:49.973513  994624 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:49.973532  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:49.973598  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:49.973671  994624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:49.973768  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:49.982880  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:50.006097  994624 start.go:303] post-start completed in 137.016363ms
	I0830 22:19:50.006124  994624 fix.go:56] fixHost completed within 24.947983055s
	I0830 22:19:50.006150  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.008513  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.008880  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.008908  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.009134  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.009371  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009560  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009755  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.009933  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.010372  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:50.010402  994624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 22:19:50.136709  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433990.121404659
	
	I0830 22:19:50.136738  994624 fix.go:206] guest clock: 1693433990.121404659
	I0830 22:19:50.136748  994624 fix.go:219] Guest: 2023-08-30 22:19:50.121404659 +0000 UTC Remote: 2023-08-30 22:19:50.006128322 +0000 UTC m=+361.306139641 (delta=115.276337ms)
	I0830 22:19:50.136792  994624 fix.go:190] guest clock delta is within tolerance: 115.276337ms
	I0830 22:19:50.136800  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 25.078698183s
	I0830 22:19:50.136834  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.137143  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:50.139834  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140214  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.140249  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140387  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.140890  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141088  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141191  994624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:50.141243  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.141312  994624 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:50.141335  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.144030  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144283  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144434  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144462  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144598  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144736  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144768  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144791  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.144912  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144987  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145152  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.145161  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.145318  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145433  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.257719  994624 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:50.263507  994624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:50.411574  994624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:50.418796  994624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:50.418872  994624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:50.435922  994624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:50.435943  994624 start.go:466] detecting cgroup driver to use...
	I0830 22:19:50.436022  994624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:50.450969  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:50.463538  994624 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:50.463596  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:50.475797  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:50.488143  994624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:50.586327  994624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:50.697497  994624 docker.go:212] disabling docker service ...
	I0830 22:19:50.697587  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:50.712369  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:50.726039  994624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:50.840596  994624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:50.967799  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:50.984629  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:51.006331  994624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:51.006404  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.017150  994624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:51.017241  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.028714  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.040075  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.054520  994624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:51.067179  994624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:51.077610  994624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:51.077685  994624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:51.093337  994624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:51.104110  994624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:51.243534  994624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:51.455149  994624 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:51.455232  994624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:51.462110  994624 start.go:534] Will wait 60s for crictl version
	I0830 22:19:51.462185  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:51.468872  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:51.509838  994624 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:51.509924  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.562065  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.630813  994624 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:47.961668  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:50.461541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:51.632256  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:51.636020  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636430  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:51.636458  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636633  994624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:51.641003  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:51.655539  994624 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:51.655595  994624 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:51.691423  994624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:51.691455  994624 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:51.691508  994624 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.691795  994624 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.691800  994624 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.691932  994624 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.692015  994624 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.692204  994624 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.692383  994624 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693156  994624 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.693256  994624 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.693294  994624 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.693393  994624 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.693613  994624 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.693700  994624 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693767  994624 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.694704  994624 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.695502  994624 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.858227  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.862141  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.862588  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.864659  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.872937  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0830 22:19:51.885087  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.912710  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.970615  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.978831  994624 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0830 22:19:51.978883  994624 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.978930  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.004057  994624 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0830 22:19:52.004112  994624 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.004153  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031261  994624 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0830 22:19:52.031330  994624 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.031350  994624 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0830 22:19:52.031393  994624 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.031456  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031394  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168753  994624 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0830 22:19:52.168817  994624 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.168842  994624 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0830 22:19:52.168760  994624 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0830 22:19:52.168882  994624 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.168906  994624 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.168931  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168944  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168948  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:52.168877  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168988  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.169048  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.169156  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.218220  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0830 22:19:52.218353  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.235432  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.235565  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0830 22:19:52.235575  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.235692  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:52.246243  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.246437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0830 22:19:52.246550  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:52.260976  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0830 22:19:52.261024  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0830 22:19:52.261041  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:19:52.262450  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0830 22:19:52.316437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0830 22:19:52.316556  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:19:52.316709  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0830 22:19:52.316807  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:19:52.330026  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0830 22:19:52.330185  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0830 22:19:52.330318  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
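	The cache_images/crio lines above first stat each image tarball under /var/lib/minikube/images and then stream it into CRI-O's storage with podman load. A minimal sketch of that check-then-load step, using a tarball path taken from this log (the same commands shown above, not a new run):

	    IMG=/var/lib/minikube/images/etcd_3.5.9-0
	    sudo stat -c "%s %y" "$IMG"         # size and mtime check before deciding to copy
	    sudo podman load -i "$IMG"          # import into the shared containers/storage
	    sudo crictl images | grep etcd      # confirm the runtime now sees the image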
	I0830 22:19:53.207917  995603 api_server.go:269] stopped: https://192.168.39.10:8443/healthz: Get "https://192.168.39.10:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0830 22:19:53.207968  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.224442  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:54.224482  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:54.724967  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.732845  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:54.732880  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.224677  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.231265  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:55.231302  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.725325  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.731785  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:19:55.739996  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:19:55.740025  995603 api_server.go:131] duration metric: took 7.533643458s to wait for apiserver health ...
	I0830 22:19:55.740037  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:55.740046  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:55.742083  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
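	The healthz polling above returns 403 for the anonymous request, then 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then 200. The same verbose check output can be requested by hand; a sketch only, assuming a kubeconfig context exists for the old-k8s-version-250163 profile:

	    # equivalent manual probes of the endpoint polled above
	    kubectl --context old-k8s-version-250163 get --raw '/healthz?verbose'
	    # unauthenticated probe (expect the 403 seen early in this log):
	    curl -k https://192.168.39.10:8443/healthz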
	I0830 22:19:52.462806  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:54.462856  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:56.962847  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:55.697808  994624 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (3.436622341s)
	I0830 22:19:55.697847  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0830 22:19:55.697882  994624 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1: (3.381312107s)
	I0830 22:19:55.697895  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0830 22:19:55.697927  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (3.436796784s)
	I0830 22:19:55.697959  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0830 22:19:55.697985  994624 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.381155963s)
	I0830 22:19:55.698014  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0830 22:19:55.697989  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:55.698035  994624 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.367694611s)
	I0830 22:19:55.698065  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0830 22:19:55.698072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:57.158231  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.460131868s)
	I0830 22:19:57.158266  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0830 22:19:57.158302  994624 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:57.158371  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:55.743724  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:55.755829  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:19:55.777604  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:55.792182  995603 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:55.792221  995603 system_pods.go:61] "coredns-5644d7b6d9-872nn" [acd3b375-2486-48c3-9032-6386a091128a] Running
	I0830 22:19:55.792232  995603 system_pods.go:61] "coredns-5644d7b6d9-lqn5v" [48a574c1-b546-4060-9aba-1e2bcdaf7541] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:55.792240  995603 system_pods.go:61] "etcd-old-k8s-version-250163" [8d4eb3c4-a10b-4803-a1cd-28199081480d] Running
	I0830 22:19:55.792247  995603 system_pods.go:61] "kube-apiserver-old-k8s-version-250163" [c2cb0944-0836-4419-9bcf-8b6ddcb8de4f] Running
	I0830 22:19:55.792253  995603 system_pods.go:61] "kube-controller-manager-old-k8s-version-250163" [953d90e1-21ec-47a8-916a-9641616443a3] Running
	I0830 22:19:55.792259  995603 system_pods.go:61] "kube-proxy-qg82w" [58c1bd37-de42-46db-8337-cad3969dbbe3] Running
	I0830 22:19:55.792265  995603 system_pods.go:61] "kube-scheduler-old-k8s-version-250163" [ead115ca-3faa-457a-a29d-6de753bf53ab] Running
	I0830 22:19:55.792271  995603 system_pods.go:61] "storage-provisioner" [e481c13c-17b5-4a76-8f56-01decf4d2dde] Running
	I0830 22:19:55.792278  995603 system_pods.go:74] duration metric: took 14.654143ms to wait for pod list to return data ...
	I0830 22:19:55.792291  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:55.800500  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:55.800529  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:55.800541  995603 node_conditions.go:105] duration metric: took 8.245305ms to run NodePressure ...
	I0830 22:19:55.800572  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:56.165598  995603 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:56.173177  995603 retry.go:31] will retry after 155.771258ms: kubelet not initialised
	I0830 22:19:56.335243  995603 retry.go:31] will retry after 435.88083ms: kubelet not initialised
	I0830 22:19:56.900108  995603 retry.go:31] will retry after 318.649581ms: kubelet not initialised
	I0830 22:19:57.226618  995603 retry.go:31] will retry after 906.607144ms: kubelet not initialised
	I0830 22:19:58.169644  995603 retry.go:31] will retry after 1.480507319s: kubelet not initialised
	I0830 22:19:59.662899  995603 retry.go:31] will retry after 1.43965579s: kubelet not initialised
	I0830 22:19:59.462944  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.463843  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.109412  995603 retry.go:31] will retry after 2.769965791s: kubelet not initialised
	I0830 22:20:03.884087  995603 retry.go:31] will retry after 5.524462984s: kubelet not initialised
	I0830 22:20:03.962393  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:06.463083  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:03.920494  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.762089682s)
	I0830 22:20:03.920528  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0830 22:20:03.920563  994624 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:03.920618  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:05.471647  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.551002795s)
	I0830 22:20:05.471696  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0830 22:20:05.471725  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:05.471808  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:07.437922  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.966087689s)
	I0830 22:20:07.437952  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0830 22:20:07.437986  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:07.438046  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:09.418426  995603 retry.go:31] will retry after 8.161662984s: kubelet not initialised
	I0830 22:20:08.961616  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:10.962062  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
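	The interleaved 995192 lines keep reporting the metrics-server pod as not Ready throughout this window. A hedged sketch of how that pod could be inspected; the k8s-app=metrics-server label selector is an assumption inferred from the pod name, not taken from this log:

	    kubectl -n kube-system get pods -l k8s-app=metrics-server   # selector assumed, see note above
	    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-p8pp2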
	I0830 22:20:09.894897  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.456819743s)
	I0830 22:20:09.894932  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0830 22:20:09.895001  994624 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:09.895072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:10.848591  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0830 22:20:10.848635  994624 cache_images.go:123] Successfully loaded all cached images
	I0830 22:20:10.848641  994624 cache_images.go:92] LoadImages completed in 19.157171696s
	I0830 22:20:10.848726  994624 ssh_runner.go:195] Run: crio config
	I0830 22:20:10.912483  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:10.912514  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:10.912545  994624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:20:10.912574  994624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-698195 NodeName:no-preload-698195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:20:10.912729  994624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-698195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:20:10.912793  994624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-698195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
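	For reference, a minimal standalone Go sketch (not minikube code) that parses a KubeletConfiguration fragment like the generated block above using the gopkg.in/yaml.v3 package and sanity-checks a couple of fields; the struct, field selection, and embedded document are illustrative only:

	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig mirrors only the YAML keys we want to check from the
	// generated KubeletConfiguration block shown in the log above.
	type kubeletConfig struct {
		Kind          string `yaml:"kind"`
		CgroupDriver  string `yaml:"cgroupDriver"`
		ClusterDomain string `yaml:"clusterDomain"`
		FailSwapOn    bool   `yaml:"failSwapOn"`
	}

	const doc = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	failSwapOn: false
	`

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			log.Fatalf("parse kubelet config: %v", err)
		}
		if cfg.Kind != "KubeletConfiguration" || cfg.CgroupDriver == "" {
			log.Fatalf("unexpected config: %+v", cfg)
		}
		fmt.Printf("parsed: %+v\n", cfg)
	}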
	I0830 22:20:10.912850  994624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:20:10.922383  994624 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:20:10.922470  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:20:10.931904  994624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0830 22:20:10.947603  994624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:20:10.963835  994624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0830 22:20:10.982645  994624 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0830 22:20:10.986493  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
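	The bash one-liner above makes the hosts entry idempotent: drop any existing control-plane.minikube.internal line, then append the current mapping. A rough standalone Go sketch of the same strip-and-append pattern, writing to /tmp/hosts.new rather than /etc/hosts (paths and behaviour are illustrative, not minikube's implementation):

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const entry = "192.168.72.28\t" + host

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing line for the host, mirroring the grep -v in the log.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		out := strings.Join(kept, "\n") + "\n"
		// Write a copy instead of /etc/hosts itself; installing it would need root.
		if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
			log.Fatal(err)
		}
		fmt.Println("wrote /tmp/hosts.new")
	}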
	I0830 22:20:10.999967  994624 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195 for IP: 192.168.72.28
	I0830 22:20:11.000000  994624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:11.000190  994624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:20:11.000252  994624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:20:11.000348  994624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.key
	I0830 22:20:11.000455  994624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key.f951a290
	I0830 22:20:11.000518  994624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key
	I0830 22:20:11.000668  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:20:11.000712  994624 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:20:11.000728  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:20:11.000844  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:20:11.000881  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:20:11.000917  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:20:11.000978  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:20:11.001876  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:20:11.025256  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:20:11.048414  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:20:11.072696  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:20:11.097029  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:20:11.123653  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:20:11.152564  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:20:11.180885  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:20:11.204194  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:20:11.227365  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:20:11.249804  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:20:11.272563  994624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:20:11.289225  994624 ssh_runner.go:195] Run: openssl version
	I0830 22:20:11.295235  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:20:11.304745  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309554  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309615  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.314775  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:20:11.327372  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:20:11.338944  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344731  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344797  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.350242  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:20:11.359913  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:20:11.369367  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373467  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373511  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.378731  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:20:11.387877  994624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:20:11.392496  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:20:11.398057  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:20:11.403555  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:20:11.409343  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:20:11.414914  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:20:11.420465  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
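	Each of the openssl x509 -checkend 86400 runs above asks whether the certificate expires within the next 86400 seconds (24 hours). A rough Go equivalent using crypto/x509, shown only for illustration and using a hypothetical certificate path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Hypothetical path; any PEM-encoded certificate works.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Same idea as openssl's -checkend 86400: fail if the certificate
		// expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
		}
		fmt.Println("certificate valid for at least another 24h")
	}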
	I0830 22:20:11.425887  994624 kubeadm.go:404] StartCluster: {Name:no-preload-698195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:20:11.425988  994624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:20:11.426031  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:11.458215  994624 cri.go:89] found id: ""
	I0830 22:20:11.458307  994624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:20:11.468981  994624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:20:11.469010  994624 kubeadm.go:636] restartCluster start
	I0830 22:20:11.469068  994624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:20:11.478113  994624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.479707  994624 kubeconfig.go:92] found "no-preload-698195" server: "https://192.168.72.28:8443"
	I0830 22:20:11.483097  994624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:20:11.492068  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.492123  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.502752  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.502766  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.502803  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.514139  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.014881  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.014982  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.027078  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.514591  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.514686  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.529329  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.014971  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.015068  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.026874  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.514310  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.514395  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.526406  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.461372  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:15.961535  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:14.014646  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.014750  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.026467  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:14.515116  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.515212  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.527110  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.014622  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.014713  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.026083  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.515205  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.515304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.530248  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.014368  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.014472  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.025785  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.514315  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.514390  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.525823  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.014305  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.014410  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.025657  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.515255  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.515331  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.527967  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.014524  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.014603  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.025912  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.514454  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.514533  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.526034  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.586022  995603 retry.go:31] will retry after 7.910874514s: kubelet not initialised
	I0830 22:20:18.460574  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:20.460727  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:19.014477  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.014563  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.025688  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:19.514231  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.514318  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.526253  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.014551  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.014632  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.026223  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.515044  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.515142  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.526336  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.014933  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:21.015017  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:21.026315  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.492708  994624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:20:21.492739  994624 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:20:21.492755  994624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:20:21.492837  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:21.528882  994624 cri.go:89] found id: ""
	I0830 22:20:21.528979  994624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:20:21.545258  994624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:20:21.554325  994624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:20:21.554387  994624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563086  994624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563121  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:21.688507  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.342362  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.552586  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.618512  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
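	Each kubeadm phase above is invoked as a bash command with the versioned binaries directory prepended to PATH. A simplified local sketch of that invocation pattern in Go (phase names taken from the log; the paths and version are placeholders, and this is not minikube's ssh_runner):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// runInitPhase mimics the pattern in the log: run a single
	// "kubeadm init phase ..." step with the versioned binaries dir on PATH.
	func runInitPhase(phase string) error {
		cmd := fmt.Sprintf(
			"sudo env PATH=/var/lib/minikube/binaries/v1.28.1:$PATH kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
			phase,
		)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
		return nil
	}

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			if err := runInitPhase(phase); err != nil {
				log.Fatal(err)
			}
		}
	}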
	I0830 22:20:22.699936  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:20:22.700029  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.715983  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.231090  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.730985  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.462833  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.462913  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:26.960795  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.230937  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:24.730685  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.230888  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.256876  994624 api_server.go:72] duration metric: took 2.556939469s to wait for apiserver process to appear ...
	I0830 22:20:25.256907  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:20:25.256929  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:25.502804  995603 retry.go:31] will retry after 19.65596925s: kubelet not initialised
	I0830 22:20:28.908329  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.908366  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:28.908382  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:28.973483  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.973534  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:29.474026  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.480796  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.480850  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:29.974406  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.981421  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.981453  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:30.474452  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:30.479311  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:20:30.490550  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:20:30.490581  994624 api_server.go:131] duration metric: took 5.233664737s to wait for apiserver health ...
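	The progression above, 403 while the anonymous user is rejected before RBAC bootstrap roles exist, then 500 from failing poststarthooks, then 200, is the usual apiserver startup sequence. A minimal Go sketch of polling an HTTPS /healthz endpoint until it returns 200, skipping TLS verification as an anonymous probe would; the address and timings are taken from the log purely as examples, and this is not minikube's api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe is anonymous, so skip certificate verification here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.28:8443/healthz" // example address from the log
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				log.Printf("healthz returned %d, retrying", resp.StatusCode)
			} else {
				log.Printf("healthz not reachable yet: %v", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver did not become healthy before deadline")
	}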
	I0830 22:20:30.490621  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:30.490634  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:30.492834  994624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:20:28.962577  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:31.461661  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:30.494469  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:20:30.508611  994624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:20:30.536470  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:20:30.547285  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:20:30.547321  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:20:30.547339  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:20:30.547352  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:20:30.547361  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:20:30.547369  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:20:30.547379  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:20:30.547391  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:20:30.547405  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:20:30.547416  994624 system_pods.go:74] duration metric: took 10.921869ms to wait for pod list to return data ...
	I0830 22:20:30.547428  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:20:30.550787  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:20:30.550816  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:20:30.550828  994624 node_conditions.go:105] duration metric: took 3.391486ms to run NodePressure ...
	I0830 22:20:30.550856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:30.786117  994624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793653  994624 kubeadm.go:787] kubelet initialised
	I0830 22:20:30.793680  994624 kubeadm.go:788] duration metric: took 7.533543ms waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793694  994624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:30.800474  994624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.808844  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808869  994624 pod_ready.go:81] duration metric: took 8.371156ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.808879  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808888  994624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.823461  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823487  994624 pod_ready.go:81] duration metric: took 14.590789ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.823497  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823504  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.834123  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834150  994624 pod_ready.go:81] duration metric: took 10.63758ms waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.834158  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834164  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.951589  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951620  994624 pod_ready.go:81] duration metric: took 117.448834ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.951628  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951635  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.343471  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343497  994624 pod_ready.go:81] duration metric: took 391.855831ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.343506  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343512  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.741491  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741527  994624 pod_ready.go:81] duration metric: took 398.007277ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.741539  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741555  994624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:32.141918  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141952  994624 pod_ready.go:81] duration metric: took 400.379332ms waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:32.141961  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141969  994624 pod_ready.go:38] duration metric: took 1.348263054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
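	The pod_ready waits above amount to checking each pod's Ready condition (and skipping while the hosting node is itself not Ready). A compact client-go sketch of the per-pod check; the kubeconfig path, namespace, and pod name are examples only, not minikube's wait implementation:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod has a Ready condition with status True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-698195", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s ready: %v\n", pod.Name, podIsReady(pod))
	}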
	I0830 22:20:32.141987  994624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:20:32.153800  994624 ops.go:34] apiserver oom_adj: -16
	I0830 22:20:32.153828  994624 kubeadm.go:640] restartCluster took 20.684809572s
	I0830 22:20:32.153848  994624 kubeadm.go:406] StartCluster complete in 20.727972693s
	I0830 22:20:32.153868  994624 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.153955  994624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:20:32.155765  994624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.156054  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:20:32.156162  994624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:20:32.156265  994624 addons.go:69] Setting storage-provisioner=true in profile "no-preload-698195"
	I0830 22:20:32.156285  994624 addons.go:231] Setting addon storage-provisioner=true in "no-preload-698195"
	I0830 22:20:32.156288  994624 addons.go:69] Setting default-storageclass=true in profile "no-preload-698195"
	I0830 22:20:32.156307  994624 addons.go:69] Setting metrics-server=true in profile "no-preload-698195"
	I0830 22:20:32.156344  994624 addons.go:231] Setting addon metrics-server=true in "no-preload-698195"
	I0830 22:20:32.156318  994624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-698195"
	I0830 22:20:32.156396  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	W0830 22:20:32.156293  994624 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:20:32.156512  994624 host.go:66] Checking if "no-preload-698195" exists ...
	W0830 22:20:32.156358  994624 addons.go:240] addon metrics-server should already be in state true
	I0830 22:20:32.156570  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.156821  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156847  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156849  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156867  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156948  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156961  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.165443  994624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-698195" context rescaled to 1 replicas
	I0830 22:20:32.165497  994624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:20:32.167715  994624 out.go:177] * Verifying Kubernetes components...
	I0830 22:20:32.169310  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:20:32.176341  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0830 22:20:32.176876  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0830 22:20:32.177070  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0830 22:20:32.177253  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177447  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177562  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177829  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.177856  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178014  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.178032  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178387  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179460  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179499  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.179517  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.179897  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179957  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.179996  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180272  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.180293  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180423  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.201009  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0830 22:20:32.201548  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.201926  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0830 22:20:32.202180  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202200  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.202304  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.202785  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.202842  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202865  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.203052  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.203202  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.203391  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.204424  994624 addons.go:231] Setting addon default-storageclass=true in "no-preload-698195"
	W0830 22:20:32.204450  994624 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:20:32.204491  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.204897  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.204931  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.205076  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.207516  994624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:20:32.206126  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.209336  994624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:20:32.210840  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:20:32.209276  994624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.210862  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:20:32.210877  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:20:32.210890  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.210897  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.214370  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214385  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214769  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214813  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214841  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.215131  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215199  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215346  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215387  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215521  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215580  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215651  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.215748  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.244173  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0830 22:20:32.244664  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.245311  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.245343  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.245760  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.246361  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.246416  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.263737  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0830 22:20:32.264177  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.264737  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.264761  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.265106  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.265342  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.266996  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.267406  994624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.267430  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:20:32.267454  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.270345  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.270799  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.270829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.271021  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.271215  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.271403  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.271526  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.362089  994624 node_ready.go:35] waiting up to 6m0s for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:32.362281  994624 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0830 22:20:32.371216  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.372220  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:20:32.372240  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:20:32.396916  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:20:32.396942  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:20:32.417651  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.430668  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:32.430699  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:20:32.476147  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:33.655453  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.284190116s)
	I0830 22:20:33.655495  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.237806074s)
	I0830 22:20:33.655515  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655532  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655519  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655602  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655854  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.655875  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.655885  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655894  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656045  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656082  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656095  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656115  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656160  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656169  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656180  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656195  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656394  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656432  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656437  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656455  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656465  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656729  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656741  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656754  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.802947  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326756295s)
	I0830 22:20:33.802994  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803016  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803349  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803371  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803381  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803391  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803393  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803632  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803682  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803700  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803720  994624 addons.go:467] Verifying addon metrics-server=true in "no-preload-698195"
	I0830 22:20:33.805489  994624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:20:33.462414  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:35.961487  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:33.806934  994624 addons.go:502] enable addons completed in 1.650789204s: enabled=[storage-provisioner default-storageclass metrics-server]
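The lines above trace how minikube enabled the storage-provisioner, default-storageclass, and metrics-server addons on no-preload-698195: each manifest is copied to /etc/kubernetes/addons on the node over ssh, then applied with the bundled kubectl against the node-local kubeconfig. A minimal sketch of the equivalent manual step, assuming ssh access to the node as shown in the log (IP, key path, username, and manifest paths are all taken from the lines above, not verified independently):

	ssh -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa docker@192.168.72.28 \
	  'sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply \
	     -f /etc/kubernetes/addons/storage-provisioner.yaml \
	     -f /etc/kubernetes/addons/storageclass.yaml \
	     -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	     -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	     -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	     -f /etc/kubernetes/addons/metrics-server-service.yaml'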
	I0830 22:20:34.550814  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:36.551274  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:38.551355  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:37.963175  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:40.462510  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:39.550464  994624 node_ready.go:49] node "no-preload-698195" has status "Ready":"True"
	I0830 22:20:39.550505  994624 node_ready.go:38] duration metric: took 7.188369926s waiting for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:39.550516  994624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:39.556533  994624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562470  994624 pod_ready.go:92] pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.562498  994624 pod_ready.go:81] duration metric: took 5.934964ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562511  994624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568348  994624 pod_ready.go:92] pod "etcd-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.568371  994624 pod_ready.go:81] duration metric: took 5.853085ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568380  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:41.593857  994624 pod_ready.go:102] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:42.594544  994624 pod_ready.go:92] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.594572  994624 pod_ready.go:81] duration metric: took 3.026185311s waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.594586  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599820  994624 pod_ready.go:92] pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.599844  994624 pod_ready.go:81] duration metric: took 5.249213ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599856  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751073  994624 pod_ready.go:92] pod "kube-proxy-5fjvd" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.751096  994624 pod_ready.go:81] duration metric: took 151.233562ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751105  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150620  994624 pod_ready.go:92] pod "kube-scheduler-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:43.150646  994624 pod_ready.go:81] duration metric: took 399.535323ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150656  994624 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
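From this point on, the interleaved pod_ready.go:102 lines are the same readiness poll repeating every few seconds for three separate clusters (processes 994624, 995192, and 995603), each waiting on its metrics-server pod until the per-test deadline expires. A hedged equivalent of one such check, expressed with the standard kubectl wait form rather than minikube's internal poller (binary path, kubeconfig path, pod name, and timeout copied from the log):

	KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.1/kubectl -n kube-system \
	  wait --for=condition=Ready pod/metrics-server-57f55c9bc5-nfbkd --timeout=6m0s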
	I0830 22:20:42.464235  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:44.960831  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:46.961923  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.458489  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:47.958322  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.165236  995603 kubeadm.go:787] kubelet initialised
	I0830 22:20:45.165261  995603 kubeadm.go:788] duration metric: took 48.999634631s waiting for restarted kubelet to initialise ...
	I0830 22:20:45.165269  995603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:45.170939  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176235  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.176259  995603 pod_ready.go:81] duration metric: took 5.296469ms waiting for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176271  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180703  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.180718  995603 pod_ready.go:81] duration metric: took 4.44114ms waiting for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180725  995603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185225  995603 pod_ready.go:92] pod "etcd-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.185244  995603 pod_ready.go:81] duration metric: took 4.512736ms waiting for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185255  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190403  995603 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.190425  995603 pod_ready.go:81] duration metric: took 5.162774ms waiting for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190436  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564427  995603 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.564460  995603 pod_ready.go:81] duration metric: took 374.00421ms waiting for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564473  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964836  995603 pod_ready.go:92] pod "kube-proxy-qg82w" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.964857  995603 pod_ready.go:81] duration metric: took 400.377393ms waiting for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964866  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364023  995603 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:46.364046  995603 pod_ready.go:81] duration metric: took 399.172301ms waiting for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364060  995603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:48.672124  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:48.962198  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.461425  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:49.958485  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.959424  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.170855  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.172690  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.962708  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.461729  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:54.458026  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.458124  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.459811  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:55.669393  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:57.670454  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:59.670654  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.463098  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.962495  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.960274  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.457998  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:02.170872  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:04.670725  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.460674  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.461496  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.459727  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.959179  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:06.671066  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.169869  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.463765  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.961943  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.959351  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.458921  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:11.171435  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:13.171597  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.461881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.961416  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.459572  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:16.960064  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:15.670176  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:18.170049  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:17.460985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.462323  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.963325  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.459085  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.460169  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:20.671600  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.169931  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:24.464683  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.962740  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.958014  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.458502  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.458654  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:25.670985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.171321  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:29.461798  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:31.961714  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.464431  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.958557  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.669588  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.670695  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.671313  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.463531  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:36.960658  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.960256  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.460047  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.168958  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.170995  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:38.961145  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:40.961870  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.958213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.958373  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.670302  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.171198  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:43.461666  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:45.461738  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.459123  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.459226  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.459428  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.670708  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.671826  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:47.462306  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:49.462771  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.962010  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:50.958149  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:52.958493  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.169610  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:53.170386  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.461116  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:56.959735  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.959069  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.458784  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:55.172123  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.670323  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.671985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:58.961225  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:00.961822  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.959058  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:01.959700  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.170880  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:04.171473  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.961938  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:05.461758  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:03.960213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.458196  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:08.458500  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.671998  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:09.169979  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:07.962031  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.460716  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.960753  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.459638  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:11.669885  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.670821  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:12.461433  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:14.463156  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:16.961558  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.459765  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:17.959192  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.671350  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:18.170569  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.462375  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:21.961785  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.959308  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.457592  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:20.173424  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.671008  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:23.961985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.962149  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:24.458343  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:26.958471  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.169264  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.181579  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.670923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.964954  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:30.461530  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.458262  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:31.463334  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.171662  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.670239  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.961287  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.961787  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:33.957827  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:35.958367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.960259  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:36.671642  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.169834  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.462107  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.961576  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.961773  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:40.458367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:42.458710  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.671303  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.170994  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:43.964448  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.461777  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.958652  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.960005  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.171108  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.670866  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.462315  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:50.462456  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:49.459011  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.958137  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.170020  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.171135  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:52.462694  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:54.962055  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.958728  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.959555  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.671421  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:58.169881  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.461322  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:59.461865  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:01.963541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.458148  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.458834  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.170265  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.170719  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.670111  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:03.967458  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:05.972793  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.958722  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:07.458954  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:06.670434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.671269  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.461195  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:10.961859  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:09.458999  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.958146  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.170482  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.670156  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.462648  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.463851  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.958659  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.962293  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.458707  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.670647  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.170462  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:17.960881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:19.962032  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.959370  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.459653  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.670329  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.169817  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:22.461024  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:24.461537  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:26.960897  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.958696  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.459488  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.671024  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.170228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:29.461009  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:31.461891  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.958318  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.958723  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.170683  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.670966  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:33.462005  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.960841  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:34.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.458068  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.170093  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.671411  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.961501  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.460893  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:39.458824  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:41.461623  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.170169  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.670892  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.461840  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:43.154742  995192 pod_ready.go:81] duration metric: took 4m0.000931927s waiting for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	E0830 22:23:43.154776  995192 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:23:43.154798  995192 pod_ready.go:38] duration metric: took 4m7.830262728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:23:43.154853  995192 kubeadm.go:640] restartCluster took 4m30.336637887s
	W0830 22:23:43.154961  995192 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:23:43.155001  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0830 22:23:43.959940  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:46.458406  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:45.170898  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:47.670457  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:48.957451  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:51.457818  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:50.171371  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:52.171468  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:54.670175  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:53.958105  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:56.458176  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:57.169990  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:59.177173  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:58.957583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:00.958404  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:02.958866  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:01.670484  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:03.671368  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.457466  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:07.457893  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.671480  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:08.170128  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:09.458376  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:11.958335  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:10.171221  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:12.171398  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.171694  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.432406  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.277378744s)
	I0830 22:24:14.432498  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:14.446038  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:24:14.455354  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:24:14.464292  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:24:14.464332  995192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 22:24:14.680764  995192 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:24:13.965662  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.460984  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.171891  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.671072  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.958205  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.959096  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:23.459244  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.671733  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:22.671947  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.677772  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.927380  995192 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:24:24.927462  995192 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:24:24.927559  995192 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:24:24.927697  995192 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:24:24.927843  995192 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:24:24.927938  995192 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:24:24.929775  995192 out.go:204]   - Generating certificates and keys ...
	I0830 22:24:24.929895  995192 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:24:24.930004  995192 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:24:24.930118  995192 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:24:24.930202  995192 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:24:24.930321  995192 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:24:24.930408  995192 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:24:24.930485  995192 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:24:24.930559  995192 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:24:24.930658  995192 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:24:24.930756  995192 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:24:24.930821  995192 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:24:24.930922  995192 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:24:24.931009  995192 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:24:24.931077  995192 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:24:24.931170  995192 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:24:24.931245  995192 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:24:24.931354  995192 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:24:24.931430  995192 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:24:24.934341  995192 out.go:204]   - Booting up control plane ...
	I0830 22:24:24.934422  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:24:24.934524  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:24:24.934580  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:24:24.934689  995192 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:24:24.934770  995192 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:24:24.934809  995192 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:24:24.934936  995192 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:24:24.935014  995192 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003378 seconds
	I0830 22:24:24.935150  995192 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:24:24.935261  995192 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:24:24.935317  995192 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:24:24.935490  995192 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-791007 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:24:24.935540  995192 kubeadm.go:322] [bootstrap-token] Using token: 3t39h1.cgypp2756rpdn3ql
	I0830 22:24:24.937035  995192 out.go:204]   - Configuring RBAC rules ...
	I0830 22:24:24.937140  995192 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:24:24.937246  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:24:24.937428  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:24:24.937619  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:24:24.937762  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:24:24.937883  995192 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:24:24.938044  995192 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:24:24.938105  995192 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:24:24.938178  995192 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:24:24.938197  995192 kubeadm.go:322] 
	I0830 22:24:24.938277  995192 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:24:24.938290  995192 kubeadm.go:322] 
	I0830 22:24:24.938389  995192 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:24:24.938398  995192 kubeadm.go:322] 
	I0830 22:24:24.938429  995192 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:24:24.938506  995192 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:24:24.938577  995192 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:24:24.938586  995192 kubeadm.go:322] 
	I0830 22:24:24.938658  995192 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:24:24.938681  995192 kubeadm.go:322] 
	I0830 22:24:24.938745  995192 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:24:24.938754  995192 kubeadm.go:322] 
	I0830 22:24:24.938825  995192 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:24:24.938930  995192 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:24:24.939065  995192 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:24:24.939076  995192 kubeadm.go:322] 
	I0830 22:24:24.939160  995192 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:24:24.939266  995192 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:24:24.939280  995192 kubeadm.go:322] 
	I0830 22:24:24.939367  995192 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939452  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:24:24.939473  995192 kubeadm.go:322] 	--control-plane 
	I0830 22:24:24.939479  995192 kubeadm.go:322] 
	I0830 22:24:24.939597  995192 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:24:24.939606  995192 kubeadm.go:322] 
	I0830 22:24:24.939685  995192 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939848  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:24:24.939880  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:24:24.939916  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:24:24.942544  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:24:24.943961  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:24:24.990449  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:24:25.040966  995192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:24:25.041042  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.041041  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=default-k8s-diff-port-791007 minikube.k8s.io/updated_at=2023_08_30T22_24_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.441321  995192 ops.go:34] apiserver oom_adj: -16
	I0830 22:24:25.441492  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.557357  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.163303  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.663721  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.459794  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.957287  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.171894  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:29.671326  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.163474  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:27.664036  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.163187  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.663338  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.163719  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.663846  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.163288  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.663346  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.163165  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.663996  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.958583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.960227  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.671923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:34.171143  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:32.163631  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:32.663347  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.163634  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.663228  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.163600  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.663994  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.163597  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.663419  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.163764  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.663168  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.163646  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.663613  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.163643  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.264223  995192 kubeadm.go:1081] duration metric: took 13.22324453s to wait for elevateKubeSystemPrivileges.
	I0830 22:24:38.264262  995192 kubeadm.go:406] StartCluster complete in 5m25.484553135s
	I0830 22:24:38.264286  995192 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.264411  995192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:24:38.266553  995192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.266800  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:24:38.266990  995192 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:24:38.267105  995192 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267117  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:24:38.267126  995192 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267141  995192 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:24:38.267163  995192 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267184  995192 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267209  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267214  995192 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267234  995192 addons.go:240] addon metrics-server should already be in state true
	I0830 22:24:38.267207  995192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-791007"
	I0830 22:24:38.267330  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267664  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267735  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267806  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267797  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267851  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267866  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.285812  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0830 22:24:38.286287  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287008  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.287036  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.287384  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0830 22:24:38.287488  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0830 22:24:38.287526  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.287808  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287949  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.288154  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.288200  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.288370  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288516  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288582  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288562  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288947  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289135  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289343  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.289569  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.289610  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.299364  995192 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.299392  995192 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:24:38.299422  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.299824  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.299861  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.305325  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0830 22:24:38.305834  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.306214  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0830 22:24:38.306525  995192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-791007" context rescaled to 1 replicas
	I0830 22:24:38.306561  995192 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:24:38.308424  995192 out.go:177] * Verifying Kubernetes components...
	I0830 22:24:38.306646  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.306688  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.309840  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:38.309911  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310245  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310362  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.310381  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310433  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.310801  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.312319  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.314072  995192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:24:38.313018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.315723  995192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.315742  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:24:38.315759  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.317188  995192 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:24:34.457685  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.458268  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.459052  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.171434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.173228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.318441  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:24:38.318465  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:24:38.318488  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.319537  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.320365  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320640  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.321238  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.321431  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.321733  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.322284  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322691  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.322778  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322887  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.323058  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.323205  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.323265  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.328412  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0830 22:24:38.328853  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.329468  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.329479  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.329898  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.330379  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.330395  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.345318  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0830 22:24:38.345781  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.346309  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.346329  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.346665  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.346886  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.348620  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.348922  995192 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.348941  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:24:38.348961  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.351758  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352206  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.352233  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352357  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.352562  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.352787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.352918  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.474078  995192 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.474205  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:24:38.479269  995192 node_ready.go:49] node "default-k8s-diff-port-791007" has status "Ready":"True"
	I0830 22:24:38.479294  995192 node_ready.go:38] duration metric: took 5.181356ms waiting for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.479305  995192 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:38.486715  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:38.508419  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:24:38.508443  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:24:38.515075  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.532789  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.549460  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:24:38.549488  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:24:38.593580  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:38.593614  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:24:38.637965  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:40.093211  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.618968297s)
	I0830 22:24:40.093259  995192 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0830 22:24:40.526723  995192 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526748  995192 pod_ready.go:81] duration metric: took 2.040009497s waiting for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:40.526757  995192 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526765  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:40.552258  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.037149365s)
	I0830 22:24:40.552312  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019488451s)
	I0830 22:24:40.552317  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552381  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552351  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552696  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552714  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552724  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552734  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552891  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552902  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552918  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552927  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553114  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553132  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553170  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553202  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553210  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553219  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.553225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553478  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553493  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.776628  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.138598233s)
	I0830 22:24:40.776714  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.776731  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777199  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777224  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777246  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777256  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.777270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777546  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777626  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777647  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777667  995192 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-791007"
	I0830 22:24:40.779719  995192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:24:40.781134  995192 addons.go:502] enable addons completed in 2.51415908s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:24:40.459185  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:42.958731  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.150847  994624 pod_ready.go:81] duration metric: took 4m0.000170406s waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:43.150881  994624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:43.150893  994624 pod_ready.go:38] duration metric: took 4m3.600363648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.150919  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.150964  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:43.151043  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:43.199383  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:43.199412  994624 cri.go:89] found id: ""
	I0830 22:24:43.199420  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:43.199479  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.204289  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:43.204371  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:43.247303  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.247329  994624 cri.go:89] found id: ""
	I0830 22:24:43.247340  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:43.247396  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.252955  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:43.253024  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:43.286292  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.286318  994624 cri.go:89] found id: ""
	I0830 22:24:43.286327  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:43.286386  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.290585  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:43.290653  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:43.323616  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:43.323645  994624 cri.go:89] found id: ""
	I0830 22:24:43.323655  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:43.323729  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.328256  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:43.328326  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:43.363566  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:43.363595  994624 cri.go:89] found id: ""
	I0830 22:24:43.363605  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:43.363666  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.368006  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:43.368067  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:43.403728  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.403752  994624 cri.go:89] found id: ""
	I0830 22:24:43.403761  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:43.403833  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.407957  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:43.408020  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:43.438864  994624 cri.go:89] found id: ""
	I0830 22:24:43.438893  994624 logs.go:284] 0 containers: []
	W0830 22:24:43.438903  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:43.438911  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:43.438976  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:43.478905  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.478935  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:43.478942  994624 cri.go:89] found id: ""
	I0830 22:24:43.478951  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:43.479015  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.486919  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.496040  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:43.496070  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:43.669727  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:43.669764  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.712471  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:43.712508  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.746949  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:43.746988  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:42.573674  995192 pod_ready.go:92] pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.573706  995192 pod_ready.go:81] duration metric: took 2.046935361s waiting for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.573716  995192 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579433  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.579450  995192 pod_ready.go:81] duration metric: took 5.72841ms waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579458  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584499  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.584519  995192 pod_ready.go:81] duration metric: took 5.055504ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584527  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678045  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.678071  995192 pod_ready.go:81] duration metric: took 93.537153ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678084  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082548  995192 pod_ready.go:92] pod "kube-proxy-bbdvk" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.082576  995192 pod_ready.go:81] duration metric: took 404.485397ms waiting for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082585  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479813  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.479840  995192 pod_ready.go:81] duration metric: took 397.248046ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479851  995192 pod_ready.go:38] duration metric: took 5.000533366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.479872  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.479956  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:43.498558  995192 api_server.go:72] duration metric: took 5.191959207s to wait for apiserver process to appear ...
	I0830 22:24:43.498583  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:43.498603  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:24:43.504260  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:24:43.505566  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:43.505589  995192 api_server.go:131] duration metric: took 6.997863ms to wait for apiserver health ...
	I0830 22:24:43.505598  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:43.682798  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:43.682837  995192 system_pods.go:61] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:43.682846  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:43.682856  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:43.682863  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:43.682870  995192 system_pods.go:61] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:43.682876  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:43.682887  995192 system_pods.go:61] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:43.682897  995192 system_pods.go:61] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:43.682909  995192 system_pods.go:74] duration metric: took 177.304345ms to wait for pod list to return data ...
	I0830 22:24:43.682919  995192 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:43.878616  995192 default_sa.go:45] found service account: "default"
	I0830 22:24:43.878643  995192 default_sa.go:55] duration metric: took 195.70884ms for default service account to be created ...
	I0830 22:24:43.878654  995192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:44.083123  995192 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:44.083155  995192 system_pods.go:89] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:44.083161  995192 system_pods.go:89] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:44.083165  995192 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:44.083170  995192 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:44.083177  995192 system_pods.go:89] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:44.083181  995192 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:44.083187  995192 system_pods.go:89] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:44.083194  995192 system_pods.go:89] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:44.083203  995192 system_pods.go:126] duration metric: took 204.542978ms to wait for k8s-apps to be running ...
	I0830 22:24:44.083216  995192 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:44.083297  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:44.098110  995192 system_svc.go:56] duration metric: took 14.88196ms WaitForService to wait for kubelet.
	I0830 22:24:44.098143  995192 kubeadm.go:581] duration metric: took 5.7915497s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:44.098211  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:44.278751  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:44.278802  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:44.278814  995192 node_conditions.go:105] duration metric: took 180.597923ms to run NodePressure ...
	I0830 22:24:44.278825  995192 start.go:228] waiting for startup goroutines ...
	I0830 22:24:44.278831  995192 start.go:233] waiting for cluster config update ...
	I0830 22:24:44.278841  995192 start.go:242] writing updated cluster config ...
	I0830 22:24:44.279208  995192 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:44.332074  995192 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:44.334502  995192 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-791007" cluster and "default" namespace by default
	I0830 22:24:40.672327  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.171136  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.780116  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:43.780147  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.824462  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:43.824494  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:43.875847  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:43.875881  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:43.937533  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:43.937582  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:43.950917  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:43.950948  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.989236  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:43.989265  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:44.025171  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:44.025218  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:44.644566  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:44.644609  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:44.692321  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:44.692356  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.229304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:47.252442  994624 api_server.go:72] duration metric: took 4m15.086891336s to wait for apiserver process to appear ...
	I0830 22:24:47.252476  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:47.252521  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:47.252593  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:47.286367  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.286397  994624 cri.go:89] found id: ""
	I0830 22:24:47.286410  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:47.286461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.290812  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:47.290883  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:47.324349  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.324376  994624 cri.go:89] found id: ""
	I0830 22:24:47.324386  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:47.324440  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.329002  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:47.329072  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:47.362954  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:47.362985  994624 cri.go:89] found id: ""
	I0830 22:24:47.362996  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:47.363062  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.367498  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:47.367587  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:47.398450  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.398478  994624 cri.go:89] found id: ""
	I0830 22:24:47.398489  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:47.398550  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.402646  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:47.402741  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:47.438663  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:47.438691  994624 cri.go:89] found id: ""
	I0830 22:24:47.438701  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:47.438769  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.443046  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:47.443114  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:47.472698  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.472725  994624 cri.go:89] found id: ""
	I0830 22:24:47.472733  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:47.472792  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.477075  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:47.477150  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:47.507099  994624 cri.go:89] found id: ""
	I0830 22:24:47.507138  994624 logs.go:284] 0 containers: []
	W0830 22:24:47.507148  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:47.507157  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:47.507232  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:47.540635  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:47.540661  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.540667  994624 cri.go:89] found id: ""
	I0830 22:24:47.540676  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:47.540734  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.545274  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.549659  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:47.549681  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:47.605419  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:47.605460  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.646819  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:47.646856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.684772  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:47.684801  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.731741  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:47.731791  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.762713  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:47.762745  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:48.266510  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:48.266557  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:48.315124  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:48.315164  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:48.332407  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:48.332447  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:48.463670  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:48.463710  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:48.498034  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:48.498067  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:48.528326  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:48.528372  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:48.563858  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:48.563893  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:45.670559  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:46.364206  995603 pod_ready.go:81] duration metric: took 4m0.000126235s waiting for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:46.364246  995603 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:46.364267  995603 pod_ready.go:38] duration metric: took 4m1.19899008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:46.364298  995603 kubeadm.go:640] restartCluster took 5m11.375966766s
	W0830 22:24:46.364364  995603 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:24:46.364394  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0830 22:24:51.095064  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:24:51.106674  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:24:51.108320  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:51.108339  994624 api_server.go:131] duration metric: took 3.855856321s to wait for apiserver health ...
	I0830 22:24:51.108347  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:51.108375  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:51.108422  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:51.140030  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:51.140059  994624 cri.go:89] found id: ""
	I0830 22:24:51.140069  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:51.140133  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.144302  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:51.144375  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:51.181915  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:51.181944  994624 cri.go:89] found id: ""
	I0830 22:24:51.181953  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:51.182007  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.187104  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:51.187171  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:51.220763  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:51.220794  994624 cri.go:89] found id: ""
	I0830 22:24:51.220806  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:51.220890  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.225368  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:51.225443  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:51.263131  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:51.263155  994624 cri.go:89] found id: ""
	I0830 22:24:51.263164  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:51.263231  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.268531  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:51.268586  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:51.307119  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.307145  994624 cri.go:89] found id: ""
	I0830 22:24:51.307154  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:51.307224  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.311914  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:51.311988  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:51.341363  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:51.341391  994624 cri.go:89] found id: ""
	I0830 22:24:51.341402  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:51.341461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.345501  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:51.345570  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:51.378276  994624 cri.go:89] found id: ""
	I0830 22:24:51.378311  994624 logs.go:284] 0 containers: []
	W0830 22:24:51.378322  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:51.378329  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:51.378398  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:51.416207  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.416228  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:51.416232  994624 cri.go:89] found id: ""
	I0830 22:24:51.416245  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:51.416295  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.421034  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.424911  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:51.424938  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.458543  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:51.458576  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.489189  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:51.489223  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:52.074879  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:52.074924  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:52.091316  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:52.091357  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:52.131564  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:52.131602  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:52.168850  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:52.168879  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:52.200329  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:52.200358  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:52.230767  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:52.230799  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:52.276139  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:52.276177  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:52.330487  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:52.330523  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:52.469305  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:52.469336  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:52.536395  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:52.536432  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:55.089149  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:55.089184  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.089194  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.089198  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.089203  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.089207  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.089211  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.089217  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.089224  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.089230  994624 system_pods.go:74] duration metric: took 3.980877363s to wait for pod list to return data ...
	I0830 22:24:55.089237  994624 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:55.091833  994624 default_sa.go:45] found service account: "default"
	I0830 22:24:55.091862  994624 default_sa.go:55] duration metric: took 2.618667ms for default service account to be created ...
	I0830 22:24:55.091871  994624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:55.098108  994624 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:55.098145  994624 system_pods.go:89] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.098154  994624 system_pods.go:89] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.098163  994624 system_pods.go:89] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.098179  994624 system_pods.go:89] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.098190  994624 system_pods.go:89] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.098201  994624 system_pods.go:89] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.098212  994624 system_pods.go:89] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.098233  994624 system_pods.go:89] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.098241  994624 system_pods.go:126] duration metric: took 6.364144ms to wait for k8s-apps to be running ...
	I0830 22:24:55.098250  994624 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:55.098297  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:55.114382  994624 system_svc.go:56] duration metric: took 16.118629ms WaitForService to wait for kubelet.
	I0830 22:24:55.114413  994624 kubeadm.go:581] duration metric: took 4m22.94887118s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:55.114435  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:55.118227  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:55.118256  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:55.118272  994624 node_conditions.go:105] duration metric: took 3.832437ms to run NodePressure ...
	I0830 22:24:55.118287  994624 start.go:228] waiting for startup goroutines ...
	I0830 22:24:55.118295  994624 start.go:233] waiting for cluster config update ...
	I0830 22:24:55.118309  994624 start.go:242] writing updated cluster config ...
	I0830 22:24:55.118611  994624 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:55.169756  994624 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:55.172028  994624 out.go:177] * Done! kubectl is now configured to use "no-preload-698195" cluster and "default" namespace by default
	I0830 22:25:09.359961  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (22.995525599s)
	I0830 22:25:09.360040  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:09.375757  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:25:09.385118  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:25:09.394601  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:25:09.394640  995603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0830 22:25:09.454824  995603 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0830 22:25:09.455022  995603 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:25:09.599893  995603 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:25:09.600055  995603 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:25:09.600213  995603 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:25:09.783920  995603 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:25:09.784082  995603 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:25:09.793193  995603 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0830 22:25:09.902777  995603 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:25:09.904820  995603 out.go:204]   - Generating certificates and keys ...
	I0830 22:25:09.904937  995603 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:25:09.905035  995603 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:25:09.905150  995603 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:25:09.905241  995603 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:25:09.905350  995603 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:25:09.905423  995603 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:25:09.905540  995603 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:25:09.905622  995603 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:25:09.905799  995603 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:25:09.905918  995603 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:25:09.905978  995603 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:25:09.906052  995603 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:25:10.141265  995603 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:25:10.238428  995603 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:25:10.387118  995603 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:25:10.620307  995603 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:25:10.625802  995603 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:25:10.627926  995603 out.go:204]   - Booting up control plane ...
	I0830 22:25:10.629866  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:25:10.635839  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:25:10.638800  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:25:10.641079  995603 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:25:10.666312  995603 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:25:20.671894  995603 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004868 seconds
	I0830 22:25:20.672078  995603 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:25:20.687003  995603 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:25:21.215417  995603 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:25:21.215657  995603 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-250163 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0830 22:25:21.726398  995603 kubeadm.go:322] [bootstrap-token] Using token: y3ik1i.subqwfsto1ck6o9y
	I0830 22:25:21.728095  995603 out.go:204]   - Configuring RBAC rules ...
	I0830 22:25:21.728243  995603 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:25:21.735828  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:25:21.741247  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:25:21.744588  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:25:21.747966  995603 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:25:21.835002  995603 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:25:22.157106  995603 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:25:22.157129  995603 kubeadm.go:322] 
	I0830 22:25:22.157207  995603 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:25:22.157221  995603 kubeadm.go:322] 
	I0830 22:25:22.157343  995603 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:25:22.157373  995603 kubeadm.go:322] 
	I0830 22:25:22.157410  995603 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:25:22.157493  995603 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:25:22.157572  995603 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:25:22.157581  995603 kubeadm.go:322] 
	I0830 22:25:22.157661  995603 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:25:22.157779  995603 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:25:22.157877  995603 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:25:22.157894  995603 kubeadm.go:322] 
	I0830 22:25:22.158002  995603 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0830 22:25:22.158104  995603 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:25:22.158119  995603 kubeadm.go:322] 
	I0830 22:25:22.158250  995603 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158415  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:25:22.158457  995603 kubeadm.go:322]     --control-plane 	  
	I0830 22:25:22.158467  995603 kubeadm.go:322] 
	I0830 22:25:22.158555  995603 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:25:22.158566  995603 kubeadm.go:322] 
	I0830 22:25:22.158674  995603 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158820  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:25:22.159148  995603 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:25:22.159192  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:25:22.159205  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:25:22.160970  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:25:22.162353  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:25:22.173835  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:25:22.192193  995603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:25:22.192332  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=old-k8s-version-250163 minikube.k8s.io/updated_at=2023_08_30T22_25_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.192335  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.440832  995603 ops.go:34] apiserver oom_adj: -16
	I0830 22:25:22.441067  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.560349  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.171762  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.671955  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.171789  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.671863  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.172176  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.672262  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.172348  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.672680  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.171856  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.671722  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.171712  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.671959  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.171914  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.672320  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.171688  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.671958  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.172481  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.672528  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.172583  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.672562  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.171839  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.672125  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.172515  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.672643  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.172469  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.672444  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.171897  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.672260  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.171900  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.332591  995603 kubeadm.go:1081] duration metric: took 15.140354535s to wait for elevateKubeSystemPrivileges.
	I0830 22:25:37.332635  995603 kubeadm.go:406] StartCluster complete in 6m2.391789918s
	I0830 22:25:37.332659  995603 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.332770  995603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:25:37.334722  995603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.334991  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:25:37.335087  995603 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:25:37.335217  995603 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335241  995603 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-250163"
	W0830 22:25:37.335253  995603 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:25:37.335313  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335317  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:25:37.335322  995603 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335342  995603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-250163"
	I0830 22:25:37.335345  995603 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335380  995603 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-250163"
	W0830 22:25:37.335391  995603 addons.go:240] addon metrics-server should already be in state true
	I0830 22:25:37.335440  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335753  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335847  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335810  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335939  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.355619  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I0830 22:25:37.355760  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43941
	I0830 22:25:37.355979  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0830 22:25:37.356166  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356203  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356595  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356729  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356748  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.356730  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356793  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357097  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.357114  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357170  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357177  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357383  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.357486  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357825  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.357857  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.358246  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.358292  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.373639  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0830 22:25:37.374107  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.374639  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.374657  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.375035  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.375359  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.377439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.379303  995603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:25:37.378176  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0830 22:25:37.380617  995603 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-250163"
	W0830 22:25:37.380661  995603 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:25:37.380706  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.380787  995603 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.380802  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:25:37.380826  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.381081  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.381123  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.381726  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.382284  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.382304  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.382656  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.382878  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.384791  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.387018  995603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:25:37.385098  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.385806  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.388841  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.388863  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.388865  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:25:37.388883  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:25:37.388907  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.389015  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.389121  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.389274  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.392059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392538  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.392557  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392720  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.392861  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.392989  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.393101  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.399504  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I0830 22:25:37.399592  995603 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-250163" context rescaled to 1 replicas
	I0830 22:25:37.399627  995603 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:25:37.401322  995603 out.go:177] * Verifying Kubernetes components...
	I0830 22:25:37.400205  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.402915  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:37.403460  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.403485  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.403872  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.404488  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.404537  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.420598  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0830 22:25:37.421352  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.422218  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.422240  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.422714  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.422979  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.424750  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.425396  995603 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.425415  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:25:37.425439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.428142  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428731  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.428762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428900  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.429077  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.429330  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.429469  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.705452  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.713345  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.736333  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:25:37.736356  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:25:37.825018  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:25:37.825051  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:25:37.858566  995603 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.858657  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:25:37.888050  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:37.888082  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:25:37.901662  995603 node_ready.go:49] node "old-k8s-version-250163" has status "Ready":"True"
	I0830 22:25:37.901689  995603 node_ready.go:38] duration metric: took 43.090996ms waiting for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.901701  995603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:37.928785  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:37.960479  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:39.232573  995603 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232603  995603 pod_ready.go:81] duration metric: took 1.303781463s waiting for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	E0830 22:25:39.232616  995603 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232630  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:39.305932  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.600438988s)
	I0830 22:25:39.306003  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306018  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306031  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592647384s)
	I0830 22:25:39.306084  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306106  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306088  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.447402831s)
	I0830 22:25:39.306222  995603 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
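The sed pipeline above edits the coredns ConfigMap in place: it inserts a log directive ahead of errors and a hosts stanza ahead of the forward directive, so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here). The relevant portion of the resulting Corefile looks roughly like the following reconstruction from the sed expressions (not captured output):

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf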
	I0830 22:25:39.306459  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306481  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306485  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306512  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306518  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306534  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306517  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306608  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306628  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306638  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306862  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306903  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306911  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306946  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306972  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306981  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306993  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.307001  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.307338  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.307387  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.307407  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.425740  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.465201154s)
	I0830 22:25:39.425823  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.425844  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426221  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426260  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426272  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426289  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.426311  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426584  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426620  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426638  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426657  995603 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-250163"
	I0830 22:25:39.428628  995603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:25:39.430476  995603 addons.go:502] enable addons completed in 2.095405793s: enabled=[storage-provisioner default-storageclass metrics-server]
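The addon installation traced above follows a copy-then-apply pattern: each manifest is copied into /etc/kubernetes/addons on the node, then applied with the kubectl binary matching the cluster version. A minimal sketch of that pattern, shelling out to ssh/scp (the helper name and wiring are illustrative stand-ins, not minikube's actual ssh_runner/addons code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon copies a manifest onto the node and applies it with the
    // version-matched kubectl, mirroring the "scp memory --> ..." and
    // "sudo KUBECONFIG=... kubectl apply -f ..." lines in the log.
    // (The real flow writes root-owned paths via sudo; this sketch assumes
    // the target directory is writable over plain scp.)
    func applyAddon(node, localManifest, remoteName string) error {
        remote := "/etc/kubernetes/addons/" + remoteName
        if out, err := exec.Command("scp", localManifest, node+":"+remote).CombinedOutput(); err != nil {
            return fmt.Errorf("copy %s: %v: %s", remoteName, err, out)
        }
        apply := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.16.0/kubectl apply -f " + remote
        if out, err := exec.Command("ssh", node, apply).CombinedOutput(); err != nil {
            return fmt.Errorf("apply %s: %v: %s", remoteName, err, out)
        }
        return nil
    }

    func main() {
        if err := applyAddon("docker@192.168.39.10", "storageclass.yaml", "storageclass.yaml"); err != nil {
            fmt.Println(err)
        }
    }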
	I0830 22:25:40.785067  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.785090  995603 pod_ready.go:81] duration metric: took 1.552452887s waiting for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.785100  995603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790132  995603 pod_ready.go:92] pod "kube-proxy-866k8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.790158  995603 pod_ready.go:81] duration metric: took 5.051684ms waiting for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790173  995603 pod_ready.go:38] duration metric: took 2.888452893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:40.790199  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:25:40.790247  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:25:40.805458  995603 api_server.go:72] duration metric: took 3.405792506s to wait for apiserver process to appear ...
	I0830 22:25:40.805488  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:25:40.805512  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:25:40.812389  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:25:40.813455  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:25:40.813483  995603 api_server.go:131] duration metric: took 7.983448ms to wait for apiserver health ...
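The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 "ok". A minimal Go sketch of the same probe (certificate verification is skipped here for brevity, whereas the real check trusts the cluster CA; the address is the node IP from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Poll https://<node-ip>:8443/healthz until the apiserver reports "ok".
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only shortcut: skip TLS verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.39.10:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
    }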
	I0830 22:25:40.813520  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:25:40.818720  995603 system_pods.go:59] 4 kube-system pods found
	I0830 22:25:40.818741  995603 system_pods.go:61] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.818746  995603 system_pods.go:61] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.818754  995603 system_pods.go:61] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.818763  995603 system_pods.go:61] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.818768  995603 system_pods.go:74] duration metric: took 5.239623ms to wait for pod list to return data ...
	I0830 22:25:40.818776  995603 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:25:40.821982  995603 default_sa.go:45] found service account: "default"
	I0830 22:25:40.822001  995603 default_sa.go:55] duration metric: took 3.215755ms for default service account to be created ...
	I0830 22:25:40.822010  995603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:25:40.824823  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:40.824844  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.824850  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.824860  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.824871  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.824896  995603 retry.go:31] will retry after 244.703972ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.075793  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.075829  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.075838  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.075849  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.075860  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.075886  995603 retry.go:31] will retry after 325.650304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.407202  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.407234  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.407242  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.407252  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.407262  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.407313  995603 retry.go:31] will retry after 449.708915ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.862007  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.862038  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.862043  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.862061  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.862070  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.862086  995603 retry.go:31] will retry after 484.451835ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:42.351597  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:42.351637  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:42.351646  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:42.351656  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:42.351664  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:42.351680  995603 retry.go:31] will retry after 739.711019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.096340  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.096365  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.096371  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.096380  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.096387  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.096402  995603 retry.go:31] will retry after 871.763135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.974914  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.974947  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.974954  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.974964  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.974973  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.974994  995603 retry.go:31] will retry after 1.11275286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:45.093268  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:45.093293  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:45.093299  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:45.093306  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:45.093313  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:45.093329  995603 retry.go:31] will retry after 1.015840649s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:46.114920  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:46.114954  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:46.114961  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:46.114972  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:46.114982  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:46.115002  995603 retry.go:31] will retry after 1.822388925s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:47.942838  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:47.942870  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:47.942877  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:47.942887  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:47.942900  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:47.942920  995603 retry.go:31] will retry after 1.516432463s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:49.464430  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:49.464460  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:49.464465  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:49.464473  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:49.464480  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:49.464496  995603 retry.go:31] will retry after 2.558675876s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:52.028440  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:52.028469  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:52.028474  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:52.028481  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:52.028488  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:52.028503  995603 retry.go:31] will retry after 2.801664105s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:54.835174  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:54.835200  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:54.835205  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:54.835212  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:54.835219  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:54.835243  995603 retry.go:31] will retry after 3.386411543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:58.228062  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:58.228104  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:58.228113  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:58.228123  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:58.228136  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:58.228158  995603 retry.go:31] will retry after 5.58749509s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:03.822486  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:03.822511  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:03.822516  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:03.822523  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:03.822530  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:03.822548  995603 retry.go:31] will retry after 6.26222599s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:10.092537  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:10.092563  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:10.092569  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:10.092576  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:10.092582  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:10.092599  995603 retry.go:31] will retry after 6.680813015s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:16.780093  995603 system_pods.go:86] 5 kube-system pods found
	I0830 22:26:16.780120  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:16.780125  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Pending
	I0830 22:26:16.780130  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:16.780138  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:16.780145  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:16.780161  995603 retry.go:31] will retry after 9.963152707s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:26.749177  995603 system_pods.go:86] 7 kube-system pods found
	I0830 22:26:26.749205  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:26.749211  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:26.749215  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:26.749219  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:26.749223  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Pending
	I0830 22:26:26.749230  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:26.749237  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:26.749252  995603 retry.go:31] will retry after 8.744971537s: missing components: etcd, kube-scheduler
	I0830 22:26:35.500731  995603 system_pods.go:86] 8 kube-system pods found
	I0830 22:26:35.500759  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:35.500765  995603 system_pods.go:89] "etcd-old-k8s-version-250163" [260642d3-280e-4ae1-97bc-d15a904b3205] Running
	I0830 22:26:35.500769  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:35.500775  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:35.500779  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:35.500783  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Running
	I0830 22:26:35.500789  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:35.500796  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:35.500813  995603 system_pods.go:126] duration metric: took 54.67879848s to wait for k8s-apps to be running ...
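The "will retry after ...: missing components: ..." lines above come from a backoff loop: list the kube-system pods, check that a Running pod exists for each required control-plane component, and sleep a growing interval otherwise. A rough client-go equivalent (the name-prefix matching and doubling backoff are simplifications of minikube's label checks and jittered retry):

    package main

    import (
        "context"
        "fmt"
        "strings"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Components whose pods must be Running in kube-system before the wait ends.
        required := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}

        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        backoff := 250 * time.Millisecond
        for {
            missing := []string{}
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
            if err == nil {
                for _, want := range required {
                    found := false
                    for _, p := range pods.Items {
                        if strings.HasPrefix(p.Name, want) && p.Status.Phase == corev1.PodRunning {
                            found = true
                            break
                        }
                    }
                    if !found {
                        missing = append(missing, want)
                    }
                }
                if len(missing) == 0 {
                    fmt.Println("all control-plane pods present")
                    return
                }
            }
            fmt.Printf("will retry after %v: missing components: %s\n", backoff, strings.Join(missing, ", "))
            time.Sleep(backoff)
            backoff *= 2 // the real loop uses capped, jittered intervals
        }
    }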
	I0830 22:26:35.500827  995603 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:26:35.500876  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:26:35.519861  995603 system_svc.go:56] duration metric: took 19.021631ms WaitForService to wait for kubelet.
	I0830 22:26:35.519900  995603 kubeadm.go:581] duration metric: took 58.120243521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:26:35.519985  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:26:35.524455  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:26:35.524486  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:26:35.524537  995603 node_conditions.go:105] duration metric: took 4.543152ms to run NodePressure ...
	I0830 22:26:35.524550  995603 start.go:228] waiting for startup goroutines ...
	I0830 22:26:35.524562  995603 start.go:233] waiting for cluster config update ...
	I0830 22:26:35.524573  995603 start.go:242] writing updated cluster config ...
	I0830 22:26:35.524938  995603 ssh_runner.go:195] Run: rm -f paused
	I0830 22:26:35.578723  995603 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0830 22:26:35.580954  995603 out.go:177] 
	W0830 22:26:35.582332  995603 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0830 22:26:35.583700  995603 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0830 22:26:35.585290  995603 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-250163" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:18:56 UTC, ends at Wed 2023-08-30 22:33:46 UTC. --
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.812361177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f1046a49-6471-43ed-a454-e0f054d3ee52 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.812575999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f1046a49-6471-43ed-a454-e0f054d3ee52 name=/runtime.v1alpha2.RuntimeService/ListContainers
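The repeated ListContainers request/response pairs in this journal are CRI debug traces: a CRI client (the kubelet, or minikube's log collection) polls CRI-O over its gRPC socket with an empty filter, and CRI-O answers with the full container list. A minimal client making the same v1alpha2 call (socket path assumed to be CRI-O's default):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Dial CRI-O's gRPC endpoint; /var/run/crio/crio.sock is the usual default.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        // An empty filter returns every container, matching the
        // "No filters were applied, returning full container list" lines above.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{},
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
        }
    }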
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.847462452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c288497a-e076-4558-aec5-7a4e26ff4454 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.847549801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c288497a-e076-4558-aec5-7a4e26ff4454 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.847702007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c288497a-e076-4558-aec5-7a4e26ff4454 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.883673745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8a7a3018-48df-46b0-a5c1-895a22a7faa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.883760599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8a7a3018-48df-46b0-a5c1-895a22a7faa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.883998900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8a7a3018-48df-46b0-a5c1-895a22a7faa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.922974637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0f921da2-2036-4e0a-a70b-374deeb1a7eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.923064925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0f921da2-2036-4e0a-a70b-374deeb1a7eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.923347771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0f921da2-2036-4e0a-a70b-374deeb1a7eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.968546546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a383b0ba-96e5-4526-8877-1fab0d79d5cb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.968690781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a383b0ba-96e5-4526-8877-1fab0d79d5cb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:45 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:45.968905500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a383b0ba-96e5-4526-8877-1fab0d79d5cb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.011849392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1daa1db5-efad-479d-8b37-4d96f8c8126e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.011942710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1daa1db5-efad-479d-8b37-4d96f8c8126e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.012121830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1daa1db5-efad-479d-8b37-4d96f8c8126e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.042222656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3e92f93e-a6b9-46e2-9393-98384f3e7d99 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.042394510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3e92f93e-a6b9-46e2-9393-98384f3e7d99 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.042553202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3e92f93e-a6b9-46e2-9393-98384f3e7d99 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.061454972Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=49827164-efe8-4a81-8479-0fb032b40aeb name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.061678155Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:673ae295095c4bab4be736de4d383e671f01655a530626690a42bbbef8c96b3f,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-dllmg,Uid:6826d918-a2ac-4744-8145-f6d7599499af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434280965055784,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-dllmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6826d918-a2ac-4744-8145-f6d7599499af,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:24:40.626564944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fb41168e-19d2-4b57-a2fb-ab0b
3d0ff836,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434280902430955,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-08-30T22:24:40.565833720Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jwn87,Uid:984f4b65-9261-4952-a368-5fac2fa14bd7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434279835519368,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:24:37.997860503Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&PodSandboxMetadata{Name:kube-proxy-bbdvk,Uid:dd98a34a-f2f9-4e
73-a751-e68a1addb89f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434279606603794,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:24:37.770236227Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-791007,Uid:99fcf05bcab8afc51c97c0772eeb6a59,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434256397077473,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 99fcf05bcab8afc51c97c0772eeb6a59,kubernetes.io/config.seen: 2023-08-30T22:24:15.855781634Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-791007,Uid:a079395c9162847b9a330dbc46de23e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434256384943636,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a079395c9162847b9a330dbc46de23e4,kubernetes.io/config.seen: 2023-08-30T22:24:15.855782375Z,kubernetes.io/config.source: file,},
RuntimeHandler:,},&PodSandbox{Id:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-791007,Uid:51efe9c4dd41db71e0ba21bdab389ceb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434256366165009,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.104:8444,kubernetes.io/config.hash: 51efe9c4dd41db71e0ba21bdab389ceb,kubernetes.io/config.seen: 2023-08-30T22:24:15.855780454Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-791007,Uid:6ba8db4d7d99fb8d
7abe6ba67dadb480,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434256360473997,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7abe6ba67dadb480,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.104:2379,kubernetes.io/config.hash: 6ba8db4d7d99fb8d7abe6ba67dadb480,kubernetes.io/config.seen: 2023-08-30T22:24:15.855777003Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=49827164-efe8-4a81-8479-0fb032b40aeb name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.063579129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=205bdf49-cdb7-4905-9743-164d8e09cad3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.063674296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=205bdf49-cdb7-4905-9743-164d8e09cad3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:33:46 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:33:46.064086221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=205bdf49-cdb7-4905-9743-164d8e09cad3 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	bded3689c729f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3f8191e0d2d11
	5b8928ed58904       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   9 minutes ago       Running             kube-proxy                0                   1c20c28c70756
	78554fafd9dc7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   a3a6163233a28
	a5597f1b16dd0       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   9 minutes ago       Running             kube-controller-manager   2                   e3e258fe8fa0f
	9d020424185d5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   413acaa739447
	7a575d95cbfee       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   9 minutes ago       Running             kube-scheduler            2                   36b343bc687e1
	0a27e2279b8df       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   9 minutes ago       Running             kube-apiserver            2                   7641f3d5c0e64
	
	* 
	* ==> coredns [78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-791007
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-791007
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=default-k8s-diff-port-791007
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_24_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:24:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-791007
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:33:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:29:52 +0000   Wed, 30 Aug 2023 22:24:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:29:52 +0000   Wed, 30 Aug 2023 22:24:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:29:52 +0000   Wed, 30 Aug 2023 22:24:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:29:52 +0000   Wed, 30 Aug 2023 22:24:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.104
	  Hostname:    default-k8s-diff-port-791007
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 27c45b64d2c140e0acc60d76ccf1ce71
	  System UUID:                27c45b64-d2c1-40e0-acc6-0d76ccf1ce71
	  Boot ID:                    ab5f50e2-016c-4e34-9579-6ff6f84608a5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jwn87                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-default-k8s-diff-port-791007                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-791007              250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-791007    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-bbdvk                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-default-k8s-diff-port-791007              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-dllmg                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node default-k8s-diff-port-791007 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s                  kubelet          Node default-k8s-diff-port-791007 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node default-k8s-diff-port-791007 event: Registered Node default-k8s-diff-port-791007 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug30 22:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072083] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.308731] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.509461] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150959] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.440847] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug30 22:19] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.113319] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.149093] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.112613] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.223553] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +17.199617] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[ +21.577474] kauditd_printk_skb: 29 callbacks suppressed
	[Aug30 22:24] systemd-fstab-generator[3528]: Ignoring "noauto" for root device
	[  +9.284315] systemd-fstab-generator[3853]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02] <==
	* {"level":"info","ts":"2023-08-30T22:24:18.976567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2683187f48860faf switched to configuration voters=(2775088730347016111)"}
	{"level":"info","ts":"2023-08-30T22:24:18.976695Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"52768e7a29070bd9","local-member-id":"2683187f48860faf","added-peer-id":"2683187f48860faf","added-peer-peer-urls":["https://192.168.61.104:2380"]}
	{"level":"info","ts":"2023-08-30T22:24:18.977091Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-30T22:24:18.97735Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.104:2380"}
	{"level":"info","ts":"2023-08-30T22:24:18.977368Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.104:2380"}
	{"level":"info","ts":"2023-08-30T22:24:18.978564Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2683187f48860faf","initial-advertise-peer-urls":["https://192.168.61.104:2380"],"listen-peer-urls":["https://192.168.61.104:2380"],"advertise-client-urls":["https://192.168.61.104:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.104:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-30T22:24:18.978664Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-30T22:24:19.072442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2683187f48860faf is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-30T22:24:19.072498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2683187f48860faf became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-30T22:24:19.072514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2683187f48860faf received MsgPreVoteResp from 2683187f48860faf at term 1"}
	{"level":"info","ts":"2023-08-30T22:24:19.072525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2683187f48860faf became candidate at term 2"}
	{"level":"info","ts":"2023-08-30T22:24:19.07253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2683187f48860faf received MsgVoteResp from 2683187f48860faf at term 2"}
	{"level":"info","ts":"2023-08-30T22:24:19.072539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2683187f48860faf became leader at term 2"}
	{"level":"info","ts":"2023-08-30T22:24:19.072545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2683187f48860faf elected leader 2683187f48860faf at term 2"}
	{"level":"info","ts":"2023-08-30T22:24:19.076769Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2683187f48860faf","local-member-attributes":"{Name:default-k8s-diff-port-791007 ClientURLs:[https://192.168.61.104:2379]}","request-path":"/0/members/2683187f48860faf/attributes","cluster-id":"52768e7a29070bd9","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T22:24:19.076963Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:24:19.077428Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:24:19.078437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.104:2379"}
	{"level":"info","ts":"2023-08-30T22:24:19.078504Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:24:19.079257Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T22:24:19.087365Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T22:24:19.114102Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-30T22:24:19.088506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"52768e7a29070bd9","local-member-id":"2683187f48860faf","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:24:19.114341Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:24:19.114406Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  22:33:46 up 14 min,  0 users,  load average: 0.09, 0.22, 0.17
	Linux default-k8s-diff-port-791007 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587] <==
	* E0830 22:29:22.356684       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:29:22.357912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:30:21.287718       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.106.253:443: connect: connection refused
	I0830 22:30:21.287914       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:30:22.357463       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:30:22.357531       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0830 22:30:22.357539       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0830 22:30:22.358893       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:30:22.358996       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:30:22.359011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:31:21.287399       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.106.253:443: connect: connection refused
	I0830 22:31:21.287485       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0830 22:32:21.287790       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.106.253:443: connect: connection refused
	I0830 22:32:21.288033       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:32:22.357755       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:32:22.357815       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0830 22:32:22.357821       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0830 22:32:22.359192       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:32:22.359265       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:32:22.359330       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:33:21.288123       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.106.253:443: connect: connection refused
	I0830 22:33:21.288445       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df] <==
	* I0830 22:28:08.220807       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:28:37.743672       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:28:38.231421       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:29:07.750010       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:29:08.239848       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:29:37.755554       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:29:38.247792       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:30:07.761698       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:30:08.255861       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:30:37.768606       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:30:38.268061       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0830 22:30:42.090737       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="281.102µs"
	I0830 22:30:56.099580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="299.196µs"
	E0830 22:31:07.773809       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:31:08.277681       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:31:37.780624       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:31:38.287791       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:32:07.786056       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:32:08.295818       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:32:37.793748       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:32:38.305881       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:33:07.801690       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:33:08.314015       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:33:37.808386       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:33:38.324096       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856] <==
	* I0830 22:24:42.376912       1 server_others.go:69] "Using iptables proxy"
	I0830 22:24:42.419654       1 node.go:141] Successfully retrieved node IP: 192.168.61.104
	I0830 22:24:42.509410       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 22:24:42.509458       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 22:24:42.512888       1 server_others.go:152] "Using iptables Proxier"
	I0830 22:24:42.512983       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 22:24:42.513347       1 server.go:846] "Version info" version="v1.28.1"
	I0830 22:24:42.513384       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:24:42.514625       1 config.go:188] "Starting service config controller"
	I0830 22:24:42.514668       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 22:24:42.514687       1 config.go:97] "Starting endpoint slice config controller"
	I0830 22:24:42.514690       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 22:24:42.515258       1 config.go:315] "Starting node config controller"
	I0830 22:24:42.515368       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 22:24:42.614797       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 22:24:42.614833       1 shared_informer.go:318] Caches are synced for service config
	I0830 22:24:42.616390       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f] <==
	* E0830 22:24:21.443141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 22:24:21.443147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 22:24:21.443361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 22:24:21.443493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 22:24:22.261922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 22:24:22.261994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0830 22:24:22.294543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 22:24:22.294597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0830 22:24:22.298470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 22:24:22.298522       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0830 22:24:22.375716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 22:24:22.375773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0830 22:24:22.390957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 22:24:22.391012       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0830 22:24:22.397585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 22:24:22.397639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0830 22:24:22.519700       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 22:24:22.519755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0830 22:24:22.578684       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 22:24:22.578746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0830 22:24:22.683741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 22:24:22.683801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0830 22:24:22.908921       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 22:24:22.909001       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0830 22:24:25.803659       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:18:56 UTC, ends at Wed 2023-08-30 22:33:46 UTC. --
	Aug 30 22:30:56 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:30:56.071184    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:31:08 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:31:08.070575    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:31:23 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:31:23.071524    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:31:25 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:31:25.198761    3860 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:31:25 default-k8s-diff-port-791007 kubelet[3860]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:31:25 default-k8s-diff-port-791007 kubelet[3860]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:31:25 default-k8s-diff-port-791007 kubelet[3860]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:31:37 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:31:37.072944    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:31:52 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:31:52.070936    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:32:05 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:32:05.072686    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:32:18 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:32:18.070844    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:32:25 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:32:25.197660    3860 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:32:25 default-k8s-diff-port-791007 kubelet[3860]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:32:25 default-k8s-diff-port-791007 kubelet[3860]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:32:25 default-k8s-diff-port-791007 kubelet[3860]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:32:32 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:32:32.071746    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:32:45 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:32:45.072262    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:32:58 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:32:58.071159    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:33:13 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:33:13.073987    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:33:25 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:33:25.071508    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:33:25 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:33:25.197100    3860 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:33:25 default-k8s-diff-port-791007 kubelet[3860]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:33:25 default-k8s-diff-port-791007 kubelet[3860]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:33:25 default-k8s-diff-port-791007 kubelet[3860]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:33:38 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:33:38.071744    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	
	* 
	* ==> storage-provisioner [bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047] <==
	* I0830 22:24:42.427407       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 22:24:42.445101       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 22:24:42.445221       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 22:24:42.467974       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 22:24:42.469679       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-791007_fcee0bc5-5789-4dbf-99e4-500d5de68deb!
	I0830 22:24:42.469181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a33a8bc6-bc66-4005-a6d7-a2d3f8629ead", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-791007_fcee0bc5-5789-4dbf-99e4-500d5de68deb became leader
	I0830 22:24:42.571413       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-791007_fcee0bc5-5789-4dbf-99e4-500d5de68deb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-791007 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-dllmg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-791007 describe pod metrics-server-57f55c9bc5-dllmg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-791007 describe pod metrics-server-57f55c9bc5-dllmg: exit status 1 (69.021322ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-dllmg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-791007 describe pod metrics-server-57f55c9bc5-dllmg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.95s)
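Note on the NotFound error above: the node description earlier in this log lists metrics-server-57f55c9bc5-dllmg under the kube-system namespace, while the post-mortem describe is run without a namespace flag and so queries the context's default namespace. A namespaced lookup (a sketch only; it assumes the pod still existed when the post-mortem ran) would be:

  kubectl --context default-k8s-diff-port-791007 -n kube-system describe pod metrics-server-57f55c9bc5-dllmg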

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-698195 -n no-preload-698195
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-08-30 22:33:55.760639785 +0000 UTC m=+5077.953389716
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
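For reference, a roughly equivalent manual check of what this step waits for (assuming the dashboard addon actually scheduled pods on this profile) is:

  kubectl --context no-preload-698195 -n kubernetes-dashboard wait --for=condition=ready pod --selector=k8s-app=kubernetes-dashboard --timeout=540s

A non-zero exit after the 9m0s timeout corresponds to the "context deadline exceeded" failure reported above.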
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-698195 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-698195 logs -n 25: (1.380286287s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-519738 -- sudo                         | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-519738                                 | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-184733                              | stopped-upgrade-184733       | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:09 UTC |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	| delete  | -p                                                     | disable-driver-mounts-883991 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | disable-driver-mounts-883991                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:12 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-698195             | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208903            | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-791007  | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC | 30 Aug 23 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-698195                  | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208903                 | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC | 30 Aug 23 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-250163        | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC | 30 Aug 23 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-791007       | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC | 30 Aug 23 22:24 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-250163             | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC | 30 Aug 23 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:16:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:16:59.758341  995603 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:16:59.758470  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758479  995603 out.go:309] Setting ErrFile to fd 2...
	I0830 22:16:59.758484  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758692  995603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:16:59.759241  995603 out.go:303] Setting JSON to false
	I0830 22:16:59.760232  995603 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14367,"bootTime":1693419453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:16:59.760291  995603 start.go:138] virtualization: kvm guest
	I0830 22:16:59.762744  995603 out.go:177] * [old-k8s-version-250163] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:16:59.764395  995603 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:16:59.765863  995603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:16:59.764404  995603 notify.go:220] Checking for updates...
	I0830 22:16:59.767579  995603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:16:59.769244  995603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:16:59.771001  995603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:16:59.772625  995603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:16:59.774574  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:16:59.774929  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.775032  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.790271  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0830 22:16:59.790677  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.791257  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.791283  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.791645  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.791879  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.793885  995603 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:16:59.795414  995603 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:16:59.795716  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.795752  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.810316  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0830 22:16:59.810694  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.811176  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.811201  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.811560  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.811808  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.845962  995603 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:16:59.847399  995603 start.go:298] selected driver: kvm2
	I0830 22:16:59.847410  995603 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.847546  995603 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:16:59.848301  995603 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.848376  995603 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:16:59.862654  995603 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:16:59.863040  995603 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:16:59.863080  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:16:59.863094  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:16:59.863109  995603 start_flags.go:319] config:
	{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.863341  995603 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.865500  995603 out.go:177] * Starting control plane node old-k8s-version-250163 in cluster old-k8s-version-250163
	I0830 22:17:00.916070  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:16:59.866763  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:16:59.866836  995603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0830 22:16:59.866852  995603 cache.go:57] Caching tarball of preloaded images
	I0830 22:16:59.866935  995603 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:16:59.866946  995603 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0830 22:16:59.867091  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:16:59.867314  995603 start.go:365] acquiring machines lock for old-k8s-version-250163: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:17:06.996025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:10.068036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:16.148043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:19.220024  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:25.300036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:28.372088  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:34.452043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:37.524037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:43.604037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:46.676107  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:52.756100  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:55.828195  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:01.908025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:04.980079  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:11.060035  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:14.132025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:20.212050  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:23.283995  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:26.288205  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 4m29.4670209s
	I0830 22:18:26.288261  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:26.288276  994705 fix.go:54] fixHost starting: 
	I0830 22:18:26.288621  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:26.288656  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:26.304048  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0830 22:18:26.304613  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:26.305138  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:18:26.305164  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:26.305518  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:26.305719  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:26.305843  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:18:26.307597  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Stopped err=<nil>
	I0830 22:18:26.307639  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	W0830 22:18:26.307827  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:26.309985  994705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208903" ...
	I0830 22:18:26.311551  994705 main.go:141] libmachine: (embed-certs-208903) Calling .Start
	I0830 22:18:26.311750  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring networks are active...
	I0830 22:18:26.312528  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network default is active
	I0830 22:18:26.312814  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network mk-embed-certs-208903 is active
	I0830 22:18:26.313153  994705 main.go:141] libmachine: (embed-certs-208903) Getting domain xml...
	I0830 22:18:26.313857  994705 main.go:141] libmachine: (embed-certs-208903) Creating domain...
	I0830 22:18:26.285881  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:26.285939  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:18:26.288013  994624 machine.go:91] provisioned docker machine in 4m37.410947228s
	I0830 22:18:26.288063  994624 fix.go:56] fixHost completed within 4m37.432260867s
	I0830 22:18:26.288085  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 4m37.432330775s
	W0830 22:18:26.288107  994624 start.go:672] error starting host: provision: host is not running
	W0830 22:18:26.288219  994624 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0830 22:18:26.288225  994624 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:27.529120  994705 main.go:141] libmachine: (embed-certs-208903) Waiting to get IP...
	I0830 22:18:27.530028  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.530390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.530515  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.530404  996319 retry.go:31] will retry after 311.351139ms: waiting for machine to come up
	I0830 22:18:27.843013  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.843398  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.843427  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.843337  996319 retry.go:31] will retry after 367.953943ms: waiting for machine to come up
	I0830 22:18:28.213214  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.213785  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.213820  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.213722  996319 retry.go:31] will retry after 424.275825ms: waiting for machine to come up
	I0830 22:18:28.639216  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.639670  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.639707  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.639609  996319 retry.go:31] will retry after 502.321201ms: waiting for machine to come up
	I0830 22:18:29.143240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.143823  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.143850  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.143790  996319 retry.go:31] will retry after 680.495047ms: waiting for machine to come up
	I0830 22:18:29.825462  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.825879  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.825904  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.825836  996319 retry.go:31] will retry after 756.63617ms: waiting for machine to come up
	I0830 22:18:30.583723  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:30.584179  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:30.584212  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:30.584118  996319 retry.go:31] will retry after 851.722792ms: waiting for machine to come up
	I0830 22:18:31.437603  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:31.438031  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:31.438063  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:31.437986  996319 retry.go:31] will retry after 1.214893807s: waiting for machine to come up
	I0830 22:18:31.289961  994624 start.go:365] acquiring machines lock for no-preload-698195: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:32.654351  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:32.654803  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:32.654829  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:32.654756  996319 retry.go:31] will retry after 1.574180335s: waiting for machine to come up
	I0830 22:18:34.231491  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:34.231911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:34.231944  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:34.231826  996319 retry.go:31] will retry after 1.99107048s: waiting for machine to come up
	I0830 22:18:36.225911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:36.226336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:36.226363  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:36.226283  996319 retry.go:31] will retry after 1.816508761s: waiting for machine to come up
	I0830 22:18:38.044672  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:38.045061  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:38.045094  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:38.045021  996319 retry.go:31] will retry after 2.343148299s: waiting for machine to come up
	I0830 22:18:40.389346  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:40.389753  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:40.389778  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:40.389700  996319 retry.go:31] will retry after 3.682098761s: waiting for machine to come up
	I0830 22:18:45.025750  995192 start.go:369] acquired machines lock for "default-k8s-diff-port-791007" in 3m32.939054887s
	I0830 22:18:45.025823  995192 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:45.025847  995192 fix.go:54] fixHost starting: 
	I0830 22:18:45.026291  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:45.026333  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:45.041161  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0830 22:18:45.041657  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:45.042176  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:18:45.042208  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:45.042544  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:45.042748  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:18:45.042910  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:18:45.044428  995192 fix.go:102] recreateIfNeeded on default-k8s-diff-port-791007: state=Stopped err=<nil>
	I0830 22:18:45.044454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	W0830 22:18:45.044615  995192 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:45.046538  995192 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-791007" ...
	I0830 22:18:44.074916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075386  994705 main.go:141] libmachine: (embed-certs-208903) Found IP for machine: 192.168.50.159
	I0830 22:18:44.075411  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has current primary IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075418  994705 main.go:141] libmachine: (embed-certs-208903) Reserving static IP address...
	I0830 22:18:44.075899  994705 main.go:141] libmachine: (embed-certs-208903) Reserved static IP address: 192.168.50.159
	I0830 22:18:44.075928  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.075939  994705 main.go:141] libmachine: (embed-certs-208903) Waiting for SSH to be available...
	I0830 22:18:44.075959  994705 main.go:141] libmachine: (embed-certs-208903) DBG | skip adding static IP to network mk-embed-certs-208903 - found existing host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"}
	I0830 22:18:44.075968  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Getting to WaitForSSH function...
	I0830 22:18:44.078068  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.078436  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH client type: external
	I0830 22:18:44.078533  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa (-rw-------)
	I0830 22:18:44.078569  994705 main.go:141] libmachine: (embed-certs-208903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:18:44.078590  994705 main.go:141] libmachine: (embed-certs-208903) DBG | About to run SSH command:
	I0830 22:18:44.078622  994705 main.go:141] libmachine: (embed-certs-208903) DBG | exit 0
	I0830 22:18:44.167514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | SSH cmd err, output: <nil>: 
	I0830 22:18:44.167898  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetConfigRaw
	I0830 22:18:44.168594  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.170974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.171370  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171696  994705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:18:44.171967  994705 machine.go:88] provisioning docker machine ...
	I0830 22:18:44.171989  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:44.172184  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172371  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:18:44.172397  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172563  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.174522  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174861  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.174894  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174988  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.175159  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175286  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175413  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.175627  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.176111  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.176132  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:18:44.309192  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:18:44.309225  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.311931  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312327  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.312362  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312512  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.312727  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.312919  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.313048  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.313215  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.313623  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.313638  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:18:44.440529  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:44.440594  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:18:44.440641  994705 buildroot.go:174] setting up certificates
	I0830 22:18:44.440653  994705 provision.go:83] configureAuth start
	I0830 22:18:44.440663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.440943  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.443289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443663  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.443705  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443805  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.445987  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446297  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.446328  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446462  994705 provision.go:138] copyHostCerts
	I0830 22:18:44.446524  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:18:44.446550  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:18:44.446638  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:18:44.446750  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:18:44.446763  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:18:44.446800  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:18:44.446907  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:18:44.446919  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:18:44.446955  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:18:44.447036  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:18:44.664313  994705 provision.go:172] copyRemoteCerts
	I0830 22:18:44.664387  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:18:44.664434  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.666819  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667160  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.667192  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667338  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.667565  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.667687  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.667839  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:18:44.756922  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:18:44.780430  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:18:44.803396  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:18:44.825975  994705 provision.go:86] duration metric: configureAuth took 385.307932ms
	I0830 22:18:44.826006  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:18:44.826230  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:18:44.826334  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.828862  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829199  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.829240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829383  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.829606  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829770  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829907  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.830104  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.830593  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.830615  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:18:45.025539  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025585  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:18:45.025596  994705 machine.go:91] provisioned docker machine in 853.613637ms
	I0830 22:18:45.025627  994705 fix.go:56] fixHost completed within 18.737351046s
	I0830 22:18:45.025637  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 18.737393499s
	W0830 22:18:45.025662  994705 start.go:672] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0830 22:18:45.025746  994705 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025760  994705 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:45.047821  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Start
	I0830 22:18:45.047982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring networks are active...
	I0830 22:18:45.048684  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network default is active
	I0830 22:18:45.049040  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network mk-default-k8s-diff-port-791007 is active
	I0830 22:18:45.049401  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Getting domain xml...
	I0830 22:18:45.050009  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Creating domain...
	I0830 22:18:46.288943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting to get IP...
	I0830 22:18:46.289982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290359  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.290388  996430 retry.go:31] will retry after 228.105709ms: waiting for machine to come up
	I0830 22:18:46.519862  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520369  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.520342  996430 retry.go:31] will retry after 343.008473ms: waiting for machine to come up
	I0830 22:18:46.865023  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865426  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.865385  996430 retry.go:31] will retry after 467.017605ms: waiting for machine to come up
	I0830 22:18:50.028247  994705 start.go:365] acquiring machines lock for embed-certs-208903: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:47.334027  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334655  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.334600  996430 retry.go:31] will retry after 601.952764ms: waiting for machine to come up
	I0830 22:18:47.937980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.938387  996430 retry.go:31] will retry after 556.18277ms: waiting for machine to come up
	I0830 22:18:48.495747  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496130  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:48.496101  996430 retry.go:31] will retry after 696.126701ms: waiting for machine to come up
	I0830 22:18:49.193405  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193789  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193822  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:49.193752  996430 retry.go:31] will retry after 1.123021492s: waiting for machine to come up
	I0830 22:18:50.318326  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318710  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:50.318637  996430 retry.go:31] will retry after 1.198520166s: waiting for machine to come up
	I0830 22:18:51.518959  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519302  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519332  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:51.519244  996430 retry.go:31] will retry after 1.851352392s: waiting for machine to come up
	I0830 22:18:53.373208  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373676  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373713  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:53.373594  996430 retry.go:31] will retry after 1.789163964s: waiting for machine to come up
	I0830 22:18:55.164132  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164664  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:55.164587  996430 retry.go:31] will retry after 2.037803279s: waiting for machine to come up
	I0830 22:18:57.204503  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204957  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204984  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:57.204919  996430 retry.go:31] will retry after 3.365492251s: waiting for machine to come up
	I0830 22:19:00.572195  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:19:00.572533  996430 retry.go:31] will retry after 3.57478782s: waiting for machine to come up
	I0830 22:19:05.536665  995603 start.go:369] acquired machines lock for "old-k8s-version-250163" in 2m5.669275373s
	I0830 22:19:05.536730  995603 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:05.536751  995603 fix.go:54] fixHost starting: 
	I0830 22:19:05.537197  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:05.537240  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:05.556581  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0830 22:19:05.557016  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:05.557559  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:19:05.557590  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:05.557937  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:05.558124  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:05.558290  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:19:05.559829  995603 fix.go:102] recreateIfNeeded on old-k8s-version-250163: state=Stopped err=<nil>
	I0830 22:19:05.559871  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	W0830 22:19:05.560056  995603 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:05.562726  995603 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-250163" ...
	I0830 22:19:04.151280  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.151787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Found IP for machine: 192.168.61.104
	I0830 22:19:04.151820  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserving static IP address...
	I0830 22:19:04.151839  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has current primary IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.152254  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.152286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserved static IP address: 192.168.61.104
	I0830 22:19:04.152306  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | skip adding static IP to network mk-default-k8s-diff-port-791007 - found existing host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"}
	I0830 22:19:04.152324  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for SSH to be available...
	I0830 22:19:04.152339  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Getting to WaitForSSH function...
	I0830 22:19:04.154335  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.154701  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154791  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH client type: external
	I0830 22:19:04.154833  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa (-rw-------)
	I0830 22:19:04.154852  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:04.154868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | About to run SSH command:
	I0830 22:19:04.154879  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | exit 0
	I0830 22:19:04.251692  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:04.252182  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetConfigRaw
	I0830 22:19:04.252842  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.255184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255536  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.255571  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255850  995192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/config.json ...
	I0830 22:19:04.256118  995192 machine.go:88] provisioning docker machine ...
	I0830 22:19:04.256143  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:04.256344  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256504  995192 buildroot.go:166] provisioning hostname "default-k8s-diff-port-791007"
	I0830 22:19:04.256525  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256653  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.259010  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259366  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.259389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259509  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.259667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.260115  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.260787  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.260810  995192 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-791007 && echo "default-k8s-diff-port-791007" | sudo tee /etc/hostname
	I0830 22:19:04.403123  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-791007
	
	I0830 22:19:04.403166  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.405835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406219  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.406270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406476  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.406704  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.406892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.407047  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.407233  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.407634  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.407658  995192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-791007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-791007/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-791007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:04.549964  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:04.550002  995192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:04.550039  995192 buildroot.go:174] setting up certificates
	I0830 22:19:04.550053  995192 provision.go:83] configureAuth start
	I0830 22:19:04.550071  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.550422  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.552844  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553116  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.553150  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553313  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.555514  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.555880  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.555917  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.556036  995192 provision.go:138] copyHostCerts
	I0830 22:19:04.556100  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:04.556133  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:04.556213  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:04.556343  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:04.556354  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:04.556392  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:04.556485  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:04.556496  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:04.556528  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:04.556607  995192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-791007 san=[192.168.61.104 192.168.61.104 localhost 127.0.0.1 minikube default-k8s-diff-port-791007]
	I0830 22:19:04.756354  995192 provision.go:172] copyRemoteCerts
	I0830 22:19:04.756413  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:04.756438  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.759134  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759511  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.759544  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759739  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.759977  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.760153  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.760297  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:04.858949  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:04.882455  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0830 22:19:04.905659  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:04.929876  995192 provision.go:86] duration metric: configureAuth took 379.794026ms
	I0830 22:19:04.929905  995192 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:04.930124  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:04.930228  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.932799  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933159  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.933192  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933316  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.933531  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933703  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.934015  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.934606  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.934633  995192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:05.266317  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:05.266349  995192 machine.go:91] provisioned docker machine in 1.010213866s
	I0830 22:19:05.266363  995192 start.go:300] post-start starting for "default-k8s-diff-port-791007" (driver="kvm2")
	I0830 22:19:05.266378  995192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:05.266402  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.266764  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:05.266802  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.269938  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270300  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.270345  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270472  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.270650  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.270800  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.270922  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.365334  995192 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:05.369583  995192 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:05.369608  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:05.369701  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:05.369790  995192 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:05.369879  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:05.377933  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:05.401027  995192 start.go:303] post-start completed in 134.648062ms
	I0830 22:19:05.401051  995192 fix.go:56] fixHost completed within 20.37520461s
	I0830 22:19:05.401079  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.404156  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.404629  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404765  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.404960  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405138  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405260  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.405463  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:05.405917  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:05.405930  995192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:05.536449  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433945.485000324
	
	I0830 22:19:05.536479  995192 fix.go:206] guest clock: 1693433945.485000324
	I0830 22:19:05.536490  995192 fix.go:219] Guest: 2023-08-30 22:19:05.485000324 +0000 UTC Remote: 2023-08-30 22:19:05.401056033 +0000 UTC m=+233.468479321 (delta=83.944291ms)
	I0830 22:19:05.536524  995192 fix.go:190] guest clock delta is within tolerance: 83.944291ms
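
The fix.go lines above compare the guest's clock (read over SSH with `date`) against the host's and skip a resync because the ~84ms drift is inside the allowed tolerance. A minimal Go sketch of that comparison, assuming a hypothetical 2-second tolerance (the actual threshold is not shown in this log):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDriftWithinTolerance reports whether the guest clock is close enough
    // to the host clock that no resync is needed. The guest time is what the VM
    // reported over SSH; the host time is read locally at roughly the same instant.
    func clockDriftWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(83944291 * time.Nanosecond) // the 83.944291ms delta seen in the log above
        delta, ok := clockDriftWithinTolerance(guest, host, 2*time.Second) // 2s tolerance is an assumption
        fmt.Printf("delta=%v, within tolerance: %v\n", delta, ok)
    }
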
	I0830 22:19:05.536535  995192 start.go:83] releasing machines lock for "default-k8s-diff-port-791007", held for 20.510742441s
	I0830 22:19:05.536569  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.536868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:05.539651  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540017  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.540057  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540196  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540737  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540911  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540975  995192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:05.541036  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.541133  995192 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:05.541172  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.543846  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.543892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544250  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544317  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544411  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544540  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544627  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544707  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544792  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544865  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544926  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.544972  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.677442  995192 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:05.683243  995192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:05.832776  995192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:05.838924  995192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:05.839000  995192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:05.857231  995192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:05.857251  995192 start.go:466] detecting cgroup driver to use...
	I0830 22:19:05.857349  995192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:05.875107  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:05.888540  995192 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:05.888603  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:05.901129  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:05.914011  995192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:06.015763  995192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:06.144950  995192 docker.go:212] disabling docker service ...
	I0830 22:19:06.145052  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:06.159373  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:06.172560  995192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:06.279514  995192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:06.413719  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:06.427047  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:06.443765  995192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:06.443853  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.452621  995192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:06.452690  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.461365  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.470052  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.478685  995192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
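
The sed invocations above rewrite two keys in CRI-O's drop-in config, pointing pause_image at registry.k8s.io/pause:3.9 and forcing cgroup_manager to cgroupfs. A minimal Go sketch of the same kind of in-place key rewrite; the setKey helper is illustrative, and only the path, keys, and values are taken from the log:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey replaces every line assigning `key` in a CRI-O drop-in with the
    // given quoted value, mirroring the `sed -i 's|^.*key = .*$|key = "value"|'`
    // commands in the log above.
    func setKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        for key, value := range map[string]string{
            "pause_image":    "registry.k8s.io/pause:3.9",
            "cgroup_manager": "cgroupfs",
        } {
            if err := setKey(conf, key, value); err != nil {
                fmt.Println(err)
            }
        }
    }
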
	I0830 22:19:06.487763  995192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:06.495483  995192 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:06.495551  995192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:06.508009  995192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:06.516397  995192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:06.615209  995192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:06.792388  995192 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:06.792466  995192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:06.798170  995192 start.go:534] Will wait 60s for crictl version
	I0830 22:19:06.798231  995192 ssh_runner.go:195] Run: which crictl
	I0830 22:19:06.801828  995192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:06.842351  995192 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:06.842459  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.898609  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.962179  995192 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:06.963711  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:06.966803  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967189  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:06.967225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967412  995192 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:06.972033  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:05.564313  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Start
	I0830 22:19:05.564511  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring networks are active...
	I0830 22:19:05.565235  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network default is active
	I0830 22:19:05.565567  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network mk-old-k8s-version-250163 is active
	I0830 22:19:05.565954  995603 main.go:141] libmachine: (old-k8s-version-250163) Getting domain xml...
	I0830 22:19:05.566644  995603 main.go:141] libmachine: (old-k8s-version-250163) Creating domain...
	I0830 22:19:06.869485  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting to get IP...
	I0830 22:19:06.870595  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:06.871071  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:06.871133  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:06.871046  996542 retry.go:31] will retry after 294.811471ms: waiting for machine to come up
	I0830 22:19:07.167657  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.168126  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.168172  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.168099  996542 retry.go:31] will retry after 376.474639ms: waiting for machine to come up
	I0830 22:19:07.546876  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.547389  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.547419  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.547354  996542 retry.go:31] will retry after 329.757182ms: waiting for machine to come up
	I0830 22:19:07.878995  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.879572  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.879601  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.879529  996542 retry.go:31] will retry after 567.335814ms: waiting for machine to come up
	I0830 22:19:08.448373  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.448996  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.449028  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.448958  996542 retry.go:31] will retry after 510.216093ms: waiting for machine to come up
	I0830 22:19:08.960855  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.961412  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.961451  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.961326  996542 retry.go:31] will retry after 688.575912ms: waiting for machine to come up
	I0830 22:19:09.651966  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:09.652379  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:09.652411  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:09.652346  996542 retry.go:31] will retry after 1.130912238s: waiting for machine to come up
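
The retry.go:31 lines above show the kvm2 driver polling libvirt for the restarted domain's DHCP lease, sleeping a jittered, slowly growing delay between attempts. A minimal Go sketch of that wait loop; the helper name waitForIP, the attempt cap, and the delay formula are illustrative assumptions, not the driver's actual code:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("machine has no IP yet")

    // waitForIP polls lookup until it returns an address, sleeping a jittered,
    // slowly growing delay between failed attempts, like the "will retry after ..."
    // lines above. maxAttempts and the delay formula are made up for the sketch.
    func waitForIP(lookup func() (string, error), maxAttempts int) (string, error) {
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            delay := time.Duration(200+rand.Intn(300*attempt)) * time.Millisecond
            fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, delay)
            time.Sleep(delay)
        }
        return "", fmt.Errorf("no IP after %d attempts", maxAttempts)
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 4 {
                return "", errNoIP // simulate the DHCP lease not existing yet
            }
            return "192.168.61.105", nil // hypothetical address, for the example only
        }, 10)
        fmt.Println(ip, err)
    }
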
	I0830 22:19:06.984632  995192 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:06.984698  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:07.020200  995192 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:07.020282  995192 ssh_runner.go:195] Run: which lz4
	I0830 22:19:07.024254  995192 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:19:07.028470  995192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:07.028508  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:19:08.986852  995192 crio.go:444] Took 1.962647 seconds to copy over tarball
	I0830 22:19:08.986915  995192 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:10.784839  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:10.785424  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:10.785456  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:10.785355  996542 retry.go:31] will retry after 898.98114ms: waiting for machine to come up
	I0830 22:19:11.685890  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:11.686614  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:11.686646  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:11.686558  996542 retry.go:31] will retry after 1.621086004s: waiting for machine to come up
	I0830 22:19:13.310234  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:13.310696  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:13.310721  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:13.310630  996542 retry.go:31] will retry after 1.652651656s: waiting for machine to come up
	I0830 22:19:12.113071  995192 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.126115747s)
	I0830 22:19:12.113107  995192 crio.go:451] Took 3.126230 seconds to extract the tarball
	I0830 22:19:12.113120  995192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:12.156320  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:12.200547  995192 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:19:12.200573  995192 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:19:12.200652  995192 ssh_runner.go:195] Run: crio config
	I0830 22:19:12.273153  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:12.273180  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:12.273205  995192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:12.273231  995192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-791007 NodeName:default-k8s-diff-port-791007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:19:12.273413  995192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-791007"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:12.273497  995192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-791007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0830 22:19:12.273573  995192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:19:12.283536  995192 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:12.283609  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:12.292260  995192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0830 22:19:12.309407  995192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:12.325757  995192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0830 22:19:12.342664  995192 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:12.346459  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:12.358721  995192 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007 for IP: 192.168.61.104
	I0830 22:19:12.358797  995192 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:12.359010  995192 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:12.359066  995192 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:12.359147  995192 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.key
	I0830 22:19:12.359219  995192 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key.a202b4d9
	I0830 22:19:12.359255  995192 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key
	I0830 22:19:12.359363  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:12.359390  995192 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:12.359400  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:12.359424  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:12.359449  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:12.359471  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:12.359507  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:12.360328  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:12.385275  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 22:19:12.410697  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:12.434240  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:19:12.457206  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:12.484695  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:12.507670  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:12.531114  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:12.554501  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:12.579425  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:12.603211  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:12.628506  995192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:12.645536  995192 ssh_runner.go:195] Run: openssl version
	I0830 22:19:12.650882  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:12.660449  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665173  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665239  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.670785  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:12.681196  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:12.690775  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695204  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695262  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.700668  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:12.710205  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:12.719691  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724744  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724803  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.730472  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:12.740194  995192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:12.744773  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:12.750633  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:12.756228  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:12.762258  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:12.767895  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:12.773716  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
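
The run of `openssl x509 -checkend 86400` commands above confirms each control-plane certificate will still be valid 24 hours from now before the cluster restart is attempted. A minimal Go sketch of an equivalent check for one PEM file; the expiresWithin helper is illustrative, not minikube's actual implementation:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file at path
    // expires within d from now, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
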
	I0830 22:19:12.779716  995192 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-791007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:12.779849  995192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:12.779895  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:12.808983  995192 cri.go:89] found id: ""
	I0830 22:19:12.809055  995192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:12.818188  995192 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:12.818208  995192 kubeadm.go:636] restartCluster start
	I0830 22:19:12.818258  995192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:12.829333  995192 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.830440  995192 kubeconfig.go:92] found "default-k8s-diff-port-791007" server: "https://192.168.61.104:8444"
	I0830 22:19:12.833172  995192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:12.841419  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.841468  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.852072  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.852092  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.852135  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.862195  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.362894  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.362981  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.374932  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.862450  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.862558  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.874629  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.363249  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.363368  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.375071  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.862656  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.862767  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.874077  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.363282  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.363389  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.374762  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.862279  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.862375  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.873942  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.362457  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.362554  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.373922  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.862336  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.862415  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.873540  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.964585  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:14.965020  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:14.965042  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:14.964995  996542 retry.go:31] will retry after 1.89297354s: waiting for machine to come up
	I0830 22:19:16.859309  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:16.859825  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:16.859852  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:16.859777  996542 retry.go:31] will retry after 2.908196896s: waiting for machine to come up
	I0830 22:19:17.363243  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.363347  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.378177  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:17.862706  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.862785  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.877394  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.363052  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.363183  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.377397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.862918  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.862995  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.878397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.362972  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.363052  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.374591  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.863153  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.863233  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.878572  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.362613  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.362703  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.374006  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.862535  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.862634  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.874066  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.362612  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.362721  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.375262  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.863011  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.863113  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.874498  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.771969  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:19.772453  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:19.772482  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:19.772410  996542 retry.go:31] will retry after 3.967899631s: waiting for machine to come up
	I0830 22:19:23.743741  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744344  995603 main.go:141] libmachine: (old-k8s-version-250163) Found IP for machine: 192.168.39.10
	I0830 22:19:23.744371  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserving static IP address...
	I0830 22:19:23.744387  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has current primary IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744827  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.744860  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserved static IP address: 192.168.39.10
	I0830 22:19:23.744877  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | skip adding static IP to network mk-old-k8s-version-250163 - found existing host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"}
	I0830 22:19:23.744904  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Getting to WaitForSSH function...
	I0830 22:19:23.744920  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting for SSH to be available...
	I0830 22:19:23.747285  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747642  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.747676  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747864  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH client type: external
	I0830 22:19:23.747896  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa (-rw-------)
	I0830 22:19:23.747935  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:23.747954  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | About to run SSH command:
	I0830 22:19:23.747971  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | exit 0
	I0830 22:19:23.836434  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:23.837035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetConfigRaw
	I0830 22:19:23.837845  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:23.840648  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.841088  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841433  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:19:23.841663  995603 machine.go:88] provisioning docker machine ...
	I0830 22:19:23.841688  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:23.841895  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842049  995603 buildroot.go:166] provisioning hostname "old-k8s-version-250163"
	I0830 22:19:23.842069  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842291  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.844953  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845376  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.845408  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845678  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.845885  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846186  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.846361  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.846839  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.846861  995603 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-250163 && echo "old-k8s-version-250163" | sudo tee /etc/hostname
	I0830 22:19:23.981507  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-250163
	
	I0830 22:19:23.981556  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.984891  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.985249  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985369  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.985604  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.985811  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.986000  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.986199  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.986603  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.986620  995603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-250163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-250163/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-250163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:24.115894  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:24.115952  995603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:24.115985  995603 buildroot.go:174] setting up certificates
	I0830 22:19:24.115996  995603 provision.go:83] configureAuth start
	I0830 22:19:24.116014  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:24.116342  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:24.118887  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119266  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.119312  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119572  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.122166  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122551  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.122590  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122700  995603 provision.go:138] copyHostCerts
	I0830 22:19:24.122769  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:24.122793  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:24.122868  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:24.122989  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:24.123004  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:24.123035  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:24.123168  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:24.123184  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:24.123217  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:24.123302  995603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-250163 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube old-k8s-version-250163]
	I0830 22:19:24.303093  995603 provision.go:172] copyRemoteCerts
	I0830 22:19:24.303156  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:24.303182  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.305900  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306173  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.306199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306352  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.306545  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.306728  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.306873  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.393858  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:24.418791  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:19:24.441090  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:24.462926  995603 provision.go:86] duration metric: configureAuth took 346.913079ms
	I0830 22:19:24.462952  995603 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:24.463136  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:19:24.463224  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.465978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466321  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.466357  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466559  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.466785  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.466934  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.467035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.467173  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.467657  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.467676  995603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:25.058077  994624 start.go:369] acquired machines lock for "no-preload-698195" in 53.768050843s
	I0830 22:19:25.058128  994624 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:25.058141  994624 fix.go:54] fixHost starting: 
	I0830 22:19:25.058564  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:25.058603  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:25.076580  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0830 22:19:25.077082  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:25.077788  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:19:25.077824  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:25.078214  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:25.078418  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:25.078695  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:19:25.080411  994624 fix.go:102] recreateIfNeeded on no-preload-698195: state=Stopped err=<nil>
	I0830 22:19:25.080447  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	W0830 22:19:25.080636  994624 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:25.082566  994624 out.go:177] * Restarting existing kvm2 VM for "no-preload-698195" ...
	I0830 22:19:24.795523  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:24.795562  995603 machine.go:91] provisioned docker machine in 953.87669ms
	I0830 22:19:24.795575  995603 start.go:300] post-start starting for "old-k8s-version-250163" (driver="kvm2")
	I0830 22:19:24.795590  995603 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:24.795616  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:24.795984  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:24.796046  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.799136  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799534  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.799561  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799797  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.799996  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.800210  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.800396  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.890335  995603 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:24.894780  995603 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:24.894807  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:24.894890  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:24.894986  995603 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:24.895110  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:24.907259  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:24.932802  995603 start.go:303] post-start completed in 137.211475ms
	I0830 22:19:24.932829  995603 fix.go:56] fixHost completed within 19.396077949s
	I0830 22:19:24.932858  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.935762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936118  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.936160  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936310  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.936538  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936721  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936918  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.937109  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.937748  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.937767  995603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:25.057876  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433965.004650095
	
	I0830 22:19:25.057911  995603 fix.go:206] guest clock: 1693433965.004650095
	I0830 22:19:25.057924  995603 fix.go:219] Guest: 2023-08-30 22:19:25.004650095 +0000 UTC Remote: 2023-08-30 22:19:24.932833395 +0000 UTC m=+145.224486267 (delta=71.8167ms)
	I0830 22:19:25.057987  995603 fix.go:190] guest clock delta is within tolerance: 71.8167ms
	I0830 22:19:25.057998  995603 start.go:83] releasing machines lock for "old-k8s-version-250163", held for 19.521294969s
	I0830 22:19:25.058036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.058351  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:25.061325  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061749  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.061782  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061965  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062635  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062921  995603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:25.062977  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.063084  995603 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:25.063119  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.065978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066217  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066375  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066428  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066620  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066668  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066784  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066806  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.066953  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.067142  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067206  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067278  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.067389  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.181017  995603 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:25.188428  995603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:25.337310  995603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:25.346144  995603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:25.346231  995603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:25.368931  995603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:25.368966  995603 start.go:466] detecting cgroup driver to use...
	I0830 22:19:25.369048  995603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:25.383524  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:25.399296  995603 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:25.399365  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:25.416387  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:25.430426  995603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:25.552861  995603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:25.699278  995603 docker.go:212] disabling docker service ...
	I0830 22:19:25.699350  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:25.718108  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:25.736420  995603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:25.871165  995603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:25.993674  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:26.009215  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:26.027014  995603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0830 22:19:26.027122  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.038902  995603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:26.038985  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.051908  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.062635  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.073049  995603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:26.086514  995603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:26.098352  995603 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:26.098405  995603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:26.117326  995603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:26.129854  995603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:26.259656  995603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:26.476938  995603 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:26.477034  995603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:26.482773  995603 start.go:534] Will wait 60s for crictl version
	I0830 22:19:26.482841  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:26.486853  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:26.525498  995603 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:26.525595  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.585226  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.641386  995603 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0830 22:19:22.362364  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:22.362448  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:22.373701  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:22.842449  995192 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:22.842531  995192 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:22.842551  995192 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:22.842623  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:22.871557  995192 cri.go:89] found id: ""
	I0830 22:19:22.871624  995192 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:22.886295  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:22.894486  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:22.894549  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902556  995192 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902578  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.017775  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.631493  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.831074  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.923222  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.994499  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:23.994583  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.007515  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.519195  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.019167  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.519068  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.019708  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.519664  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.547751  995192 api_server.go:72] duration metric: took 2.553248139s to wait for apiserver process to appear ...
	I0830 22:19:26.547794  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:26.547816  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:25.084008  994624 main.go:141] libmachine: (no-preload-698195) Calling .Start
	I0830 22:19:25.084189  994624 main.go:141] libmachine: (no-preload-698195) Ensuring networks are active...
	I0830 22:19:25.085011  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network default is active
	I0830 22:19:25.085319  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network mk-no-preload-698195 is active
	I0830 22:19:25.085676  994624 main.go:141] libmachine: (no-preload-698195) Getting domain xml...
	I0830 22:19:25.086427  994624 main.go:141] libmachine: (no-preload-698195) Creating domain...
	I0830 22:19:26.443042  994624 main.go:141] libmachine: (no-preload-698195) Waiting to get IP...
	I0830 22:19:26.444179  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.444691  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.444784  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.444686  996676 retry.go:31] will retry after 208.17912ms: waiting for machine to come up
	I0830 22:19:26.654132  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.654621  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.654651  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.654581  996676 retry.go:31] will retry after 304.249592ms: waiting for machine to come up
	I0830 22:19:26.960205  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.960990  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.961014  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.960912  996676 retry.go:31] will retry after 342.108913ms: waiting for machine to come up
	I0830 22:19:27.304766  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.305661  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.305700  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.305602  996676 retry.go:31] will retry after 500.147687ms: waiting for machine to come up
	I0830 22:19:27.808375  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.808867  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.808884  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.808796  996676 retry.go:31] will retry after 562.543443ms: waiting for machine to come up
	I0830 22:19:28.373420  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:28.373912  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:28.373938  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:28.373863  996676 retry.go:31] will retry after 755.787662ms: waiting for machine to come up
	I0830 22:19:26.642985  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:26.646304  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646712  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:26.646773  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646957  995603 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:26.652439  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:26.667339  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:19:26.667418  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:26.703670  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:26.703750  995603 ssh_runner.go:195] Run: which lz4
	I0830 22:19:26.708087  995603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:19:26.712329  995603 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:26.712362  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0830 22:19:28.602303  995603 crio.go:444] Took 1.894253 seconds to copy over tarball
	I0830 22:19:28.602408  995603 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:30.838763  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.838807  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:30.838824  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:30.908950  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.908987  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:31.409372  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.420411  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.420480  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:31.909095  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.916778  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.916813  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:29.130983  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:29.131530  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:29.131565  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:29.131459  996676 retry.go:31] will retry after 951.657872ms: waiting for machine to come up
	I0830 22:19:30.084853  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:30.085280  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:30.085306  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:30.085247  996676 retry.go:31] will retry after 1.469099841s: waiting for machine to come up
	I0830 22:19:31.556432  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:31.556893  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:31.556918  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:31.556809  996676 retry.go:31] will retry after 1.217757948s: waiting for machine to come up
	I0830 22:19:32.775796  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:32.776120  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:32.776152  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:32.776080  996676 retry.go:31] will retry after 2.032727742s: waiting for machine to come up
	I0830 22:19:31.859924  995603 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.257478408s)
	I0830 22:19:31.859957  995603 crio.go:451] Took 3.257622 seconds to extract the tarball
	I0830 22:19:31.859970  995603 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:31.917027  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:31.965752  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:31.965777  995603 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:31.965886  995603 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.965944  995603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.965980  995603 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.965879  995603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.966084  995603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.965878  995603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.965967  995603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:31.965901  995603 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968024  995603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.968045  995603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.968079  995603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.968186  995603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.968191  995603 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.968193  995603 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968248  995603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.968766  995603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.140478  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.140975  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.157997  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0830 22:19:32.159468  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.159950  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.160033  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.161682  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.255481  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:32.261235  995603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0830 22:19:32.261291  995603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.261340  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.282724  995603 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0830 22:19:32.282781  995603 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.282854  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378268  995603 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0830 22:19:32.378372  995603 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0830 22:19:32.378417  995603 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0830 22:19:32.378507  995603 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.378551  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378377  995603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0830 22:19:32.378578  995603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0830 22:19:32.378591  995603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.378600  995603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.378295  995603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378632  995603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.378439  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378657  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.468864  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.468935  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.469002  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.469032  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.469123  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.469183  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0830 22:19:32.469184  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.563508  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0830 22:19:32.563630  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0830 22:19:32.586962  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0830 22:19:32.587044  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0830 22:19:32.587059  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0830 22:19:32.587115  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0830 22:19:32.587208  995603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.587265  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0830 22:19:32.592221  995603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0830 22:19:32.592246  995603 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.592300  995603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0830 22:19:34.254194  995603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.661863162s)
	I0830 22:19:34.254235  995603 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0830 22:19:34.254281  995603 cache_images.go:92] LoadImages completed in 2.288490025s
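	(Editor's note: the cache_images/crio lines above show the fallback path taken when the preload tarball lacks some required images: each per-image cache tarball is copied to the node and loaded into the CRI-O image store with "sudo podman load -i"; images whose cache files are missing produce the warning that follows. Below is a minimal sketch of that load step only, with an illustrative path taken from the log; it runs the command locally for simplicity, whereas minikube runs it over SSH, and it is not minikube's actual cache_images.go.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage is a hypothetical helper mirroring the step in the log:
	// the tarball is assumed to already exist on the node (copied beforehand),
	// and it is loaded into CRI-O's image store via podman.
	func loadCachedImage(nodePath string) error {
		out, err := exec.Command("sudo", "podman", "load", "-i", nodePath).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", nodePath, err, out)
		}
		return nil
	}

	func main() {
		// Path taken from the log above; other cached images follow the same pattern.
		if err := loadCachedImage("/var/lib/minikube/images/pause_3.1"); err != nil {
			fmt.Println(err)
		}
	}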
	W0830 22:19:34.254418  995603 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0830 22:19:34.254514  995603 ssh_runner.go:195] Run: crio config
	I0830 22:19:34.338842  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.338876  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.338903  995603 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:34.338929  995603 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-250163 NodeName:old-k8s-version-250163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 22:19:34.339134  995603 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-250163"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-250163
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.10:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:34.339240  995603 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-250163 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:19:34.339313  995603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0830 22:19:34.348990  995603 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:34.349076  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:34.358084  995603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0830 22:19:34.376989  995603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:34.396552  995603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0830 22:19:34.416666  995603 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:34.421910  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:34.436393  995603 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163 for IP: 192.168.39.10
	I0830 22:19:34.436490  995603 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:34.436717  995603 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:34.436774  995603 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:34.436867  995603 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.key
	I0830 22:19:34.436944  995603 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key.713efbbe
	I0830 22:19:34.437006  995603 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key
	I0830 22:19:34.437140  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:34.437187  995603 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:34.437203  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:34.437249  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:34.437284  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:34.437320  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:34.437388  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:34.438079  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:34.470943  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:19:34.503477  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:34.533783  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:19:34.562423  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:34.594418  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:34.625417  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:34.657444  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:34.689407  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:34.719004  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:34.745856  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:32.410110  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.418241  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.418269  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:32.910053  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.915839  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.915870  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.410086  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.488115  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.488161  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.909647  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.915252  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.915284  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.409978  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.418957  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:34.418995  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.909561  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.925400  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:19:34.938760  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:19:34.938793  995192 api_server.go:131] duration metric: took 8.390990557s to wait for apiserver health ...
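	(Editor's note: the healthz block above is the usual bootstrap pattern: the apiserver at https://192.168.61.104:8444/healthz is polled repeatedly, 403 responses (anonymous access before RBAC is bootstrapped) and 500 responses (post-start hooks such as rbac/bootstrap-roles still failing) are treated as "not ready yet", and polling stops once a plain 200/ok comes back. The sketch below shows that polling loop under simple assumptions, with a fixed retry interval and certificate verification skipped; it is illustrative only and not minikube's actual api_server.go.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given /healthz URL until it returns 200 or the
	// timeout expires. Non-200 responses (403, 500) simply trigger a retry,
	// mirroring the log lines above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver presents a self-signed certificate during bootstrap,
			// so this probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					return nil // healthz returned 200: control plane is up
				}
			}
			time.Sleep(500 * time.Millisecond) // illustrative retry interval
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.104:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}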
	I0830 22:19:34.938804  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.938813  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.941052  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:19:34.942805  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:34.967544  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:19:34.998450  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:35.012600  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:35.012681  995192 system_pods.go:61] "coredns-5dd5756b68-992p2" [83ad338b-0338-45c3-a5ed-f772d100046b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:35.012702  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [4ed4f652-47c4-4d79-b8a8-dd0cc778bce0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:19:35.012714  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [c01b9dfc-ad6f-4348-abf0-fde4a64bfa98] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:19:35.012732  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [94cbccaf-3d5a-480c-8ee0-b8af5030909d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:19:35.012748  995192 system_pods.go:61] "kube-proxy-vckmf" [03f05466-f99b-4803-9164-233bfb9e7bb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:19:35.012760  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [2c5e190d-c93b-400a-8538-e31cc2016cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:19:35.012774  995192 system_pods.go:61] "metrics-server-57f55c9bc5-p8pp2" [4eaff1be-4258-427b-a110-47dabbffecee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:19:35.012788  995192 system_pods.go:61] "storage-provisioner" [8db3da8b-8256-405d-8d9c-79fdb6da8ab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:19:35.012800  995192 system_pods.go:74] duration metric: took 14.324835ms to wait for pod list to return data ...
	I0830 22:19:35.012814  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:35.024186  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:35.024216  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:35.024229  995192 node_conditions.go:105] duration metric: took 11.409776ms to run NodePressure ...
	I0830 22:19:35.024284  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:35.318824  995192 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324484  995192 kubeadm.go:787] kubelet initialised
	I0830 22:19:35.324512  995192 kubeadm.go:788] duration metric: took 5.656923ms waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324525  995192 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:19:35.334137  995192 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:34.810276  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:34.810797  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:34.810836  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:34.810732  996676 retry.go:31] will retry after 2.550508742s: waiting for machine to come up
	I0830 22:19:37.364002  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:37.364550  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:37.364582  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:37.364489  996676 retry.go:31] will retry after 2.230782644s: waiting for machine to come up
	I0830 22:19:34.771235  995603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:34.787672  995603 ssh_runner.go:195] Run: openssl version
	I0830 22:19:34.793400  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:34.803208  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808108  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808166  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.814296  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:34.824791  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:34.838527  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844726  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844789  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.852442  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:34.862510  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:34.875456  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880581  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880702  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.886591  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:34.897133  995603 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:34.902292  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:34.908905  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:34.915276  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:34.921204  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:34.927878  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:34.934091  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
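The `openssl x509 -noout -in <cert> -checkend 86400` calls above verify that each control-plane certificate will still be valid 86400 seconds (24 hours) from now. As a rough, hypothetical illustration only (the file path and exit behaviour are assumptions for the sketch, not minikube's actual code), the same check can be expressed in Go with the standard library:

// Sketch: fail if a PEM certificate expires within the next 24h,
// roughly what `openssl x509 -checkend 86400` reports above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; purely illustrative.
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: not OK if the cert is no longer valid 24h from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}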
	I0830 22:19:34.940851  995603 kubeadm.go:404] StartCluster: {Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:34.940966  995603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:34.941036  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:34.978950  995603 cri.go:89] found id: ""
	I0830 22:19:34.979038  995603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:34.988290  995603 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:34.988324  995603 kubeadm.go:636] restartCluster start
	I0830 22:19:34.988403  995603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:34.998277  995603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:34.999385  995603 kubeconfig.go:92] found "old-k8s-version-250163" server: "https://192.168.39.10:8443"
	I0830 22:19:35.002017  995603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:35.013903  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.013962  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.028780  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.028800  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.028845  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.043243  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.543986  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.544109  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.555939  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.044164  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.044259  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.055496  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.544110  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.544243  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.555999  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.043535  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.043628  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.055019  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.543435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.543546  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.558778  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.044367  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.044482  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.058777  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.543327  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.543431  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.555133  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.043720  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.043874  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.059955  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.543461  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.543625  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.558707  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.360241  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:39.363755  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:40.357373  995192 pod_ready.go:92] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:40.357396  995192 pod_ready.go:81] duration metric: took 5.023230161s waiting for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:40.357409  995192 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:39.597197  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:39.597650  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:39.597684  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:39.597603  996676 retry.go:31] will retry after 3.562835127s: waiting for machine to come up
	I0830 22:19:43.161572  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:43.162020  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:43.162054  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:43.161973  996676 retry.go:31] will retry after 5.409514109s: waiting for machine to come up
	I0830 22:19:40.044014  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.044104  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.059377  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:40.543910  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.544012  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.555295  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.043380  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.043493  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.055443  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.544046  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.544121  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.555832  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.043785  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.043876  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.054809  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.543376  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.543463  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.554254  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.043435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.043543  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.054734  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.544308  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.544418  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.555603  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.044211  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.044291  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.055403  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.544013  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.544117  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.555197  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.378396  995192 pod_ready.go:102] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:42.881428  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.881456  995192 pod_ready.go:81] duration metric: took 2.524040213s waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.881467  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892688  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.892718  995192 pod_ready.go:81] duration metric: took 11.243576ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892731  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898434  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.898463  995192 pod_ready.go:81] duration metric: took 5.721888ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898476  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904261  995192 pod_ready.go:92] pod "kube-proxy-vckmf" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.904287  995192 pod_ready.go:81] duration metric: took 5.803127ms waiting for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904299  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153736  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:43.153763  995192 pod_ready.go:81] duration metric: took 249.454932ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153777  995192 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:45.462667  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:48.575718  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576172  994624 main.go:141] libmachine: (no-preload-698195) Found IP for machine: 192.168.72.28
	I0830 22:19:48.576206  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has current primary IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576217  994624 main.go:141] libmachine: (no-preload-698195) Reserving static IP address...
	I0830 22:19:48.576671  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.576719  994624 main.go:141] libmachine: (no-preload-698195) Reserved static IP address: 192.168.72.28
	I0830 22:19:48.576754  994624 main.go:141] libmachine: (no-preload-698195) DBG | skip adding static IP to network mk-no-preload-698195 - found existing host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"}
	I0830 22:19:48.576776  994624 main.go:141] libmachine: (no-preload-698195) DBG | Getting to WaitForSSH function...
	I0830 22:19:48.576792  994624 main.go:141] libmachine: (no-preload-698195) Waiting for SSH to be available...
	I0830 22:19:48.578953  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579261  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.579290  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579398  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH client type: external
	I0830 22:19:48.579417  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa (-rw-------)
	I0830 22:19:48.579451  994624 main.go:141] libmachine: (no-preload-698195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:48.579478  994624 main.go:141] libmachine: (no-preload-698195) DBG | About to run SSH command:
	I0830 22:19:48.579493  994624 main.go:141] libmachine: (no-preload-698195) DBG | exit 0
	I0830 22:19:48.679834  994624 main.go:141] libmachine: (no-preload-698195) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:48.680237  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetConfigRaw
	I0830 22:19:48.681064  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.683388  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.683844  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.683884  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.684153  994624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/config.json ...
	I0830 22:19:48.684435  994624 machine.go:88] provisioning docker machine ...
	I0830 22:19:48.684462  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:48.684708  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.684851  994624 buildroot.go:166] provisioning hostname "no-preload-698195"
	I0830 22:19:48.684883  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.685013  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.687508  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.687975  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.688018  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.688198  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.688413  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688599  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688830  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.689061  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.689695  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.689718  994624 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-698195 && echo "no-preload-698195" | sudo tee /etc/hostname
	I0830 22:19:45.014985  995603 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:45.015030  995603 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:45.015045  995603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:45.015102  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:45.049952  995603 cri.go:89] found id: ""
	I0830 22:19:45.050039  995603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:45.065202  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:45.074198  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:45.074330  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083407  995603 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083438  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:45.211527  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.256339  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.044735651s)
	I0830 22:19:46.256389  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.469714  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.542945  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.644533  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:46.644632  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:46.659432  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.182415  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.682613  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.182661  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.206336  995603 api_server.go:72] duration metric: took 1.561801361s to wait for apiserver process to appear ...
	I0830 22:19:48.206374  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:48.206399  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
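The 995603 run above ends this stretch by waiting for the apiserver healthz status at https://192.168.39.10:8443/healthz. A minimal, hypothetical Go sketch of that polling pattern follows (the endpoint, timeouts, and TLS handling are assumptions for illustration, not minikube's implementation):

// Sketch: poll an apiserver /healthz endpoint until it returns 200 OK
// or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver in this setup serves a self-signed certificate,
		// so the probe skips verification (an assumption of the sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.10:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}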
	I0830 22:19:50.136893  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 1m0.108561967s
	I0830 22:19:50.136941  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:50.136952  994705 fix.go:54] fixHost starting: 
	I0830 22:19:50.137347  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:50.137386  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:50.156678  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0830 22:19:50.157148  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:50.157739  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:19:50.157765  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:50.158103  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:50.158283  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.158445  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:19:50.160098  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Running err=<nil>
	W0830 22:19:50.160115  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:50.162162  994705 out.go:177] * Updating the running kvm2 "embed-certs-208903" VM ...
	I0830 22:19:50.163634  994705 machine.go:88] provisioning docker machine ...
	I0830 22:19:50.163663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.163906  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164077  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:19:50.164104  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164288  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.166831  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167198  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.167234  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167371  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.167561  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167731  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167902  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.168108  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.168592  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.168610  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:19:50.306738  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:19:50.306772  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.309523  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.309929  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.309974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.310182  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.310349  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310638  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310845  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.311027  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.311610  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.311644  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:50.433972  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:50.434005  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:50.434045  994705 buildroot.go:174] setting up certificates
	I0830 22:19:50.434057  994705 provision.go:83] configureAuth start
	I0830 22:19:50.434069  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.434388  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:19:50.437450  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.437883  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.437916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.438115  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.440654  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.441059  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441213  994705 provision.go:138] copyHostCerts
	I0830 22:19:50.441271  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:50.441283  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:50.441352  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:50.441453  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:50.441462  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:50.441481  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:50.441563  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:50.441575  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:50.441606  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:50.441684  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:19:50.721978  994705 provision.go:172] copyRemoteCerts
	I0830 22:19:50.722039  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:50.722072  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.724893  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725257  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.725289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725571  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.725799  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.726014  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.726181  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:19:50.817217  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:50.843335  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:50.869233  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:50.897508  994705 provision.go:86] duration metric: configureAuth took 463.432948ms
	I0830 22:19:50.897544  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:50.897804  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:50.897904  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.900633  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.901040  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901210  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.901404  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901547  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901680  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.901875  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.902287  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.902310  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:51.128816  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.128855  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:19:51.128866  994705 machine.go:91] provisioned docker machine in 965.212906ms
	I0830 22:19:51.128900  994705 fix.go:56] fixHost completed within 991.948899ms
	I0830 22:19:51.128906  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 991.990648ms
	W0830 22:19:51.129050  994705 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p embed-certs-208903" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.131823  994705 out.go:177] 
	W0830 22:19:51.133957  994705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0830 22:19:51.133985  994705 out.go:239] * 
	W0830 22:19:51.134788  994705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:19:51.136212  994705 out.go:177] 
	I0830 22:19:48.842387  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-698195
	
	I0830 22:19:48.842438  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.845727  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846100  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.846140  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846429  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.846658  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846856  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846991  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.847159  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.847578  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.847601  994624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-698195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-698195/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-698195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:48.994130  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:48.994176  994624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:48.994211  994624 buildroot.go:174] setting up certificates
	I0830 22:19:48.994244  994624 provision.go:83] configureAuth start
	I0830 22:19:48.994270  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.994612  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.997772  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998170  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.998208  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998416  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.001089  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001466  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.001498  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001639  994624 provision.go:138] copyHostCerts
	I0830 22:19:49.001702  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:49.001733  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:49.001808  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:49.001927  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:49.001937  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:49.001967  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:49.002042  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:49.002057  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:49.002085  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:49.002169  994624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.no-preload-698195 san=[192.168.72.28 192.168.72.28 localhost 127.0.0.1 minikube no-preload-698195]
	I0830 22:19:49.376465  994624 provision.go:172] copyRemoteCerts
	I0830 22:19:49.376534  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:49.376565  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.379932  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380313  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.380354  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380486  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.380738  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.380949  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.381109  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.474102  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:49.496563  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:49.518034  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:49.539392  994624 provision.go:86] duration metric: configureAuth took 545.126518ms
	I0830 22:19:49.539419  994624 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:49.539623  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:49.539719  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.542336  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542665  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.542738  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542839  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.543026  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543217  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543341  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.543459  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:49.543864  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:49.543882  994624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:49.869021  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:49.869051  994624 machine.go:91] provisioned docker machine in 1.184598655s
	I0830 22:19:49.869065  994624 start.go:300] post-start starting for "no-preload-698195" (driver="kvm2")
	I0830 22:19:49.869079  994624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:49.869110  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:49.869444  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:49.869481  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.871931  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872288  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.872333  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872502  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.872706  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.872888  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.873027  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.969286  994624 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:49.973513  994624 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:49.973532  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:49.973598  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:49.973671  994624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:49.973768  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:49.982880  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:50.006097  994624 start.go:303] post-start completed in 137.016363ms
	I0830 22:19:50.006124  994624 fix.go:56] fixHost completed within 24.947983055s
	I0830 22:19:50.006150  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.008513  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.008880  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.008908  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.009134  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.009371  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009560  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009755  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.009933  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.010372  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:50.010402  994624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:50.136709  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433990.121404659
	
	I0830 22:19:50.136738  994624 fix.go:206] guest clock: 1693433990.121404659
	I0830 22:19:50.136748  994624 fix.go:219] Guest: 2023-08-30 22:19:50.121404659 +0000 UTC Remote: 2023-08-30 22:19:50.006128322 +0000 UTC m=+361.306139641 (delta=115.276337ms)
	I0830 22:19:50.136792  994624 fix.go:190] guest clock delta is within tolerance: 115.276337ms
	I0830 22:19:50.136800  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 25.078698183s
	I0830 22:19:50.136834  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.137143  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:50.139834  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140214  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.140249  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140387  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.140890  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141088  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141191  994624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:50.141243  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.141312  994624 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:50.141335  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.144030  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144283  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144434  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144462  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144598  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144736  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144768  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144791  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.144912  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144987  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145152  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.145161  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.145318  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145433  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.257719  994624 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:50.263507  994624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:50.411574  994624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:50.418796  994624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:50.418872  994624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:50.435922  994624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:50.435943  994624 start.go:466] detecting cgroup driver to use...
	I0830 22:19:50.436022  994624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:50.450969  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:50.463538  994624 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:50.463596  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:50.475797  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:50.488143  994624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:50.586327  994624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:50.697497  994624 docker.go:212] disabling docker service ...
	I0830 22:19:50.697587  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:50.712369  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:50.726039  994624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:50.840596  994624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:50.967799  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:50.984629  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:51.006331  994624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:51.006404  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.017150  994624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:51.017241  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.028714  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.040075  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
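
Taken together, the three sed invocations above rewrite two keys and append one in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, and conmon's cgroup. A rough Go equivalent, purely for illustration (not minikube source; it folds the separate delete/append of conmon_cgroup into one replacement):

// Rewrite 02-crio.conf: set pause_image and cgroup_manager, pin conmon_cgroup to "pod".
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log; needs root
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}
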
	I0830 22:19:51.054520  994624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:51.067179  994624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:51.077610  994624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:51.077685  994624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:51.093337  994624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:51.104110  994624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:51.243534  994624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:51.455149  994624 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:51.455232  994624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:51.462110  994624 start.go:534] Will wait 60s for crictl version
	I0830 22:19:51.462185  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:51.468872  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:51.509838  994624 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:51.509924  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.562065  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.630813  994624 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:47.961668  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:50.461541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:51.632256  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:51.636020  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636430  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:51.636458  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636633  994624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:51.641003  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
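
The bash one-liner above is an idempotent /etc/hosts edit: strip any existing host.minikube.internal line, then append the current gateway mapping. A small stand-in sketch in Go, assuming the gateway IP from this run (not minikube's code; needs root to write /etc/hosts):

// Drop stale "host.minikube.internal" entries and append the current one.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.1\thost.minikube.internal" // gateway IP observed in this run

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // remove any previous mapping
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"

	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		panic(err)
	}
}
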
	I0830 22:19:51.655539  994624 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:51.655595  994624 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:51.691423  994624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:51.691455  994624 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:51.691508  994624 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.691795  994624 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.691800  994624 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.691932  994624 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.692015  994624 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.692204  994624 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.692383  994624 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693156  994624 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.693256  994624 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.693294  994624 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.693393  994624 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.693613  994624 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.693700  994624 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693767  994624 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.694704  994624 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.695502  994624 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.858227  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.862141  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.862588  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.864659  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.872937  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0830 22:19:51.885087  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.912710  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.970615  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.978831  994624 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0830 22:19:51.978883  994624 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.978930  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.004057  994624 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0830 22:19:52.004112  994624 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.004153  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031261  994624 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0830 22:19:52.031330  994624 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.031350  994624 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0830 22:19:52.031393  994624 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.031456  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031394  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168753  994624 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0830 22:19:52.168817  994624 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.168842  994624 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0830 22:19:52.168760  994624 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0830 22:19:52.168882  994624 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.168906  994624 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.168931  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168944  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168948  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:52.168877  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168988  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.169048  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.169156  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.218220  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0830 22:19:52.218353  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.235432  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.235565  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0830 22:19:52.235575  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.235692  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:52.246243  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.246437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0830 22:19:52.246550  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:52.260976  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0830 22:19:52.261024  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0830 22:19:52.261041  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:19:52.262450  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0830 22:19:52.316437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0830 22:19:52.316556  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:19:52.316709  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0830 22:19:52.316807  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:19:52.330026  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0830 22:19:52.330185  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0830 22:19:52.330318  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:19:53.207917  995603 api_server.go:269] stopped: https://192.168.39.10:8443/healthz: Get "https://192.168.39.10:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0830 22:19:53.207968  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.224442  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:54.224482  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:54.724967  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.732845  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:54.732880  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.224677  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.231265  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:55.231302  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.725325  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.731785  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:19:55.739996  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:19:55.740025  995603 api_server.go:131] duration metric: took 7.533643458s to wait for apiserver health ...
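
The healthz wait above follows a simple pattern: poll /healthz, treat the 403 (RBAC not yet bootstrapped) and 500 (post-start hooks still pending) responses as "not ready yet", and accept the first 200 "ok". A rough sketch of that loop, assuming the endpoint and rough timing from this run; TLS verification is skipped only because the probe targets a local test VM with self-signed certs:

// Poll the apiserver /healthz endpoint until it reports ok or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test VM, self-signed certs
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	url := "https://192.168.39.10:8443/healthz" // endpoint from this run

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("apiserver healthy: %s\n", body)
			return
		}
		// 403 or 500, as in the log: the apiserver is still coming up; retry.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
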
	I0830 22:19:55.740037  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:55.740046  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:55.742083  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:19:52.462806  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:54.462856  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:56.962847  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:55.697808  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (3.436622341s)
	I0830 22:19:55.697847  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0830 22:19:55.697882  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1: (3.381312107s)
	I0830 22:19:55.697895  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0830 22:19:55.697927  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (3.436796784s)
	I0830 22:19:55.697959  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0830 22:19:55.697985  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.381155963s)
	I0830 22:19:55.698014  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0830 22:19:55.697989  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:55.698035  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.367694611s)
	I0830 22:19:55.698065  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0830 22:19:55.698072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:57.158231  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.460131868s)
	I0830 22:19:57.158266  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0830 22:19:57.158302  994624 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:57.158371  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:55.743724  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:55.755829  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:19:55.777604  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:55.792182  995603 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:55.792221  995603 system_pods.go:61] "coredns-5644d7b6d9-872nn" [acd3b375-2486-48c3-9032-6386a091128a] Running
	I0830 22:19:55.792232  995603 system_pods.go:61] "coredns-5644d7b6d9-lqn5v" [48a574c1-b546-4060-9aba-1e2bcdaf7541] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:55.792240  995603 system_pods.go:61] "etcd-old-k8s-version-250163" [8d4eb3c4-a10b-4803-a1cd-28199081480d] Running
	I0830 22:19:55.792247  995603 system_pods.go:61] "kube-apiserver-old-k8s-version-250163" [c2cb0944-0836-4419-9bcf-8b6ddcb8de4f] Running
	I0830 22:19:55.792253  995603 system_pods.go:61] "kube-controller-manager-old-k8s-version-250163" [953d90e1-21ec-47a8-916a-9641616443a3] Running
	I0830 22:19:55.792259  995603 system_pods.go:61] "kube-proxy-qg82w" [58c1bd37-de42-46db-8337-cad3969dbbe3] Running
	I0830 22:19:55.792265  995603 system_pods.go:61] "kube-scheduler-old-k8s-version-250163" [ead115ca-3faa-457a-a29d-6de753bf53ab] Running
	I0830 22:19:55.792271  995603 system_pods.go:61] "storage-provisioner" [e481c13c-17b5-4a76-8f56-01decf4d2dde] Running
	I0830 22:19:55.792278  995603 system_pods.go:74] duration metric: took 14.654143ms to wait for pod list to return data ...
	I0830 22:19:55.792291  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:55.800500  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:55.800529  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:55.800541  995603 node_conditions.go:105] duration metric: took 8.245305ms to run NodePressure ...
	I0830 22:19:55.800572  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:56.165598  995603 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:56.173177  995603 retry.go:31] will retry after 155.771258ms: kubelet not initialised
	I0830 22:19:56.335243  995603 retry.go:31] will retry after 435.88083ms: kubelet not initialised
	I0830 22:19:56.900108  995603 retry.go:31] will retry after 318.649581ms: kubelet not initialised
	I0830 22:19:57.226618  995603 retry.go:31] will retry after 906.607144ms: kubelet not initialised
	I0830 22:19:58.169644  995603 retry.go:31] will retry after 1.480507319s: kubelet not initialised
	I0830 22:19:59.662899  995603 retry.go:31] will retry after 1.43965579s: kubelet not initialised
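
The "will retry after ..." lines above reflect a retry-with-growing-delay pattern around the "kubelet not initialised" check. An illustrative sketch of that pattern (not minikube's retry package; delays and jitter here are arbitrary placeholders):

// Re-run a check with a growing, jittered delay until it succeeds or the budget is spent.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(check func() error, budget time.Duration) error {
	delay := 150 * time.Millisecond
	deadline := time.Now().Add(budget)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2 // back off
	}
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}
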
	I0830 22:19:59.462944  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.463843  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.109412  995603 retry.go:31] will retry after 2.769965791s: kubelet not initialised
	I0830 22:20:03.884087  995603 retry.go:31] will retry after 5.524462984s: kubelet not initialised
	I0830 22:20:03.962393  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:06.463083  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:03.920494  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.762089682s)
	I0830 22:20:03.920528  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0830 22:20:03.920563  994624 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:03.920618  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:05.471647  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.551002795s)
	I0830 22:20:05.471696  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0830 22:20:05.471725  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:05.471808  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:07.437922  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.966087689s)
	I0830 22:20:07.437952  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0830 22:20:07.437986  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:07.438046  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:09.418426  995603 retry.go:31] will retry after 8.161662984s: kubelet not initialised
	I0830 22:20:08.961616  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:10.962062  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:09.894897  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.456819743s)
	I0830 22:20:09.894932  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0830 22:20:09.895001  994624 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:09.895072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:10.848591  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0830 22:20:10.848635  994624 cache_images.go:123] Successfully loaded all cached images
	I0830 22:20:10.848641  994624 cache_images.go:92] LoadImages completed in 19.157171696s
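
The image-load phase that just finished boils down to feeding each cached tarball under /var/lib/minikube/images to podman load, one image at a time, which is why the "Loading image:" lines above are serialized. A very small stand-in for that loop (illustrative only, not minikube code; it would run on the guest as root):

// Load every cached image tarball into the CRI-O/podman image store, sequentially.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	const dir = "/var/lib/minikube/images" // cache directory from the log

	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		tarball := filepath.Join(dir, e.Name())
		fmt.Println("Loading image:", tarball)
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			panic(fmt.Errorf("load %s: %w", tarball, err))
		}
	}
}
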
	I0830 22:20:10.848726  994624 ssh_runner.go:195] Run: crio config
	I0830 22:20:10.912483  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:10.912514  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:10.912545  994624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:20:10.912574  994624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-698195 NodeName:no-preload-698195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:20:10.912729  994624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-698195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:20:10.912793  994624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-698195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:20:10.912850  994624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:20:10.922383  994624 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:20:10.922470  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:20:10.931904  994624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0830 22:20:10.947603  994624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:20:10.963835  994624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0830 22:20:10.982645  994624 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0830 22:20:10.986493  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:20:10.999967  994624 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195 for IP: 192.168.72.28
	I0830 22:20:11.000000  994624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:11.000190  994624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:20:11.000252  994624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:20:11.000348  994624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.key
	I0830 22:20:11.000455  994624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key.f951a290
	I0830 22:20:11.000518  994624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key
	I0830 22:20:11.000668  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:20:11.000712  994624 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:20:11.000728  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:20:11.000844  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:20:11.000881  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:20:11.000917  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:20:11.000978  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:20:11.001876  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:20:11.025256  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:20:11.048414  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:20:11.072696  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:20:11.097029  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:20:11.123653  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:20:11.152564  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:20:11.180885  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:20:11.204194  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:20:11.227365  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:20:11.249804  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:20:11.272563  994624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:20:11.289225  994624 ssh_runner.go:195] Run: openssl version
	I0830 22:20:11.295235  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:20:11.304745  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309554  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309615  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.314775  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:20:11.327372  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:20:11.338944  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344731  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344797  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.350242  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:20:11.359913  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:20:11.369367  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373467  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373511  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.378731  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:20:11.387877  994624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:20:11.392496  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:20:11.398057  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:20:11.403555  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:20:11.409343  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:20:11.414914  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:20:11.420465  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
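
Each of the openssl invocations above asks the same question: will this control-plane certificate expire within the next 86400 seconds (24h)? A sketch of the equivalent check in Go with crypto/x509 (illustrative only; the file list mirrors the certs probed in this log):

// Report whether each control-plane certificate expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}
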
	I0830 22:20:11.425887  994624 kubeadm.go:404] StartCluster: {Name:no-preload-698195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:20:11.425988  994624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:20:11.426031  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:11.458215  994624 cri.go:89] found id: ""
	I0830 22:20:11.458307  994624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:20:11.468981  994624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:20:11.469010  994624 kubeadm.go:636] restartCluster start
	I0830 22:20:11.469068  994624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:20:11.478113  994624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.479707  994624 kubeconfig.go:92] found "no-preload-698195" server: "https://192.168.72.28:8443"
	I0830 22:20:11.483097  994624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:20:11.492068  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.492123  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.502752  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.502766  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.502803  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.514139  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.014881  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.014982  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.027078  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.514591  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.514686  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.529329  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.014971  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.015068  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.026874  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.514310  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.514395  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.526406  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.461372  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:15.961535  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:14.014646  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.014750  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.026467  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:14.515116  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.515212  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.527110  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.014622  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.014713  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.026083  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.515205  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.515304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.530248  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.014368  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.014472  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.025785  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.514315  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.514390  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.525823  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.014305  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.014410  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.025657  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.515255  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.515331  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.527967  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.014524  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.014603  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.025912  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.514454  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.514533  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.526034  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.586022  995603 retry.go:31] will retry after 7.910874514s: kubelet not initialised
	I0830 22:20:18.460574  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:20.460727  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:19.014477  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.014563  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.025688  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:19.514231  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.514318  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.526253  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.014551  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.014632  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.026223  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.515044  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.515142  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.526336  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.014933  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:21.015017  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:21.026315  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.492708  994624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:20:21.492739  994624 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:20:21.492755  994624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:20:21.492837  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:21.528882  994624 cri.go:89] found id: ""
	I0830 22:20:21.528979  994624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:20:21.545258  994624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:20:21.554325  994624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:20:21.554387  994624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563086  994624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563121  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:21.688507  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.342362  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.552586  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.618512  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.699936  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:20:22.700029  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.715983  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.231090  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.730985  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.462833  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.462913  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:26.960795  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.230937  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:24.730685  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.230888  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.256876  994624 api_server.go:72] duration metric: took 2.556939469s to wait for apiserver process to appear ...
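(Editor's note: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a plain poll-until-found loop: pgrep exits non-zero while no matching process exists, and the runner retries on a roughly 500ms cadence until the kubeadm-started apiserver appears, here after about 2.6s. A rough Go sketch of that pattern follows; it is an illustration of the polling idea, not minikube's actual implementation.)

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until a process matching pattern exists or the
    // timeout elapses. pgrep exits 0 and prints a PID only when there is a match.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if out, err := exec.Command("pgrep", "-xnf", pattern).Output(); err == nil {
                fmt.Printf("found pid: %s", out)
                return nil
            }
            time.Sleep(500 * time.Millisecond) // retry interval, roughly what the log shows
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }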
	I0830 22:20:25.256907  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:20:25.256929  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:25.502804  995603 retry.go:31] will retry after 19.65596925s: kubelet not initialised
	I0830 22:20:28.908329  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.908366  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:28.908382  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:28.973483  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.973534  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:29.474026  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.480796  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.480850  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:29.974406  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.981421  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.981453  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:30.474452  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:30.479311  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:20:30.490550  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:20:30.490581  994624 api_server.go:131] duration metric: took 5.233664737s to wait for apiserver health ...
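(Editor's note: the healthz sequence above follows the usual kube-apiserver startup pattern. The first probes get 403 because the unauthenticated `system:anonymous` request is rejected until the RBAC bootstrap roles, installed by the `rbac/bootstrap-roles` post-start hook that is still marked failed in the 500 output, grant anonymous access to `/healthz`; the next probes get 500 while post-start hooks are still failing; once every hook reports ok the endpoint returns 200 and the wait finishes after about 5.2s. A small self-contained Go poller in the same spirit is sketched below; TLS verification is skipped purely for illustration, whereas the real client trusts the cluster CA.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the timeout elapses,
    // printing the body of any non-200 response (403/500 during startup).
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.72.28:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }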
	I0830 22:20:30.490621  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:30.490634  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:30.492834  994624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:20:28.962577  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:31.461661  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:30.494469  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:20:30.508611  994624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:20:30.536470  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:20:30.547285  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:20:30.547321  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:20:30.547339  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:20:30.547352  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:20:30.547361  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:20:30.547369  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:20:30.547379  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:20:30.547391  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:20:30.547405  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:20:30.547416  994624 system_pods.go:74] duration metric: took 10.921869ms to wait for pod list to return data ...
	I0830 22:20:30.547428  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:20:30.550787  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:20:30.550816  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:20:30.550828  994624 node_conditions.go:105] duration metric: took 3.391486ms to run NodePressure ...
	I0830 22:20:30.550856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:30.786117  994624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793653  994624 kubeadm.go:787] kubelet initialised
	I0830 22:20:30.793680  994624 kubeadm.go:788] duration metric: took 7.533543ms waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793694  994624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:30.800474  994624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.808844  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808869  994624 pod_ready.go:81] duration metric: took 8.371156ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.808879  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808888  994624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.823461  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823487  994624 pod_ready.go:81] duration metric: took 14.590789ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.823497  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823504  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.834123  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834150  994624 pod_ready.go:81] duration metric: took 10.63758ms waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.834158  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834164  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.951589  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951620  994624 pod_ready.go:81] duration metric: took 117.448834ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.951628  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951635  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.343471  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343497  994624 pod_ready.go:81] duration metric: took 391.855831ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.343506  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343512  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.741491  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741527  994624 pod_ready.go:81] duration metric: took 398.007277ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.741539  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741555  994624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:32.141918  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141952  994624 pod_ready.go:81] duration metric: took 400.379332ms waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:32.141961  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141969  994624 pod_ready.go:38] duration metric: took 1.348263054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:32.141987  994624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:20:32.153800  994624 ops.go:34] apiserver oom_adj: -16
	I0830 22:20:32.153828  994624 kubeadm.go:640] restartCluster took 20.684809572s
	I0830 22:20:32.153848  994624 kubeadm.go:406] StartCluster complete in 20.727972693s
	I0830 22:20:32.153868  994624 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.153955  994624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:20:32.155765  994624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.156054  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:20:32.156162  994624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:20:32.156265  994624 addons.go:69] Setting storage-provisioner=true in profile "no-preload-698195"
	I0830 22:20:32.156285  994624 addons.go:231] Setting addon storage-provisioner=true in "no-preload-698195"
	I0830 22:20:32.156288  994624 addons.go:69] Setting default-storageclass=true in profile "no-preload-698195"
	I0830 22:20:32.156307  994624 addons.go:69] Setting metrics-server=true in profile "no-preload-698195"
	I0830 22:20:32.156344  994624 addons.go:231] Setting addon metrics-server=true in "no-preload-698195"
	I0830 22:20:32.156318  994624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-698195"
	I0830 22:20:32.156396  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	W0830 22:20:32.156293  994624 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:20:32.156512  994624 host.go:66] Checking if "no-preload-698195" exists ...
	W0830 22:20:32.156358  994624 addons.go:240] addon metrics-server should already be in state true
	I0830 22:20:32.156570  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.156821  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156847  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156849  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156867  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156948  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156961  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.165443  994624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-698195" context rescaled to 1 replicas
	I0830 22:20:32.165497  994624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:20:32.167715  994624 out.go:177] * Verifying Kubernetes components...
	I0830 22:20:32.169310  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:20:32.176341  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0830 22:20:32.176876  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0830 22:20:32.177070  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0830 22:20:32.177253  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177447  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177562  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177829  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.177856  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178014  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.178032  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178387  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179460  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179499  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.179517  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.179897  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179957  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.179996  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180272  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.180293  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180423  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.201009  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0830 22:20:32.201548  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.201926  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0830 22:20:32.202180  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202200  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.202304  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.202785  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.202842  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202865  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.203052  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.203202  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.203391  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.204424  994624 addons.go:231] Setting addon default-storageclass=true in "no-preload-698195"
	W0830 22:20:32.204450  994624 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:20:32.204491  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.204897  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.204931  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.205076  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.207516  994624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:20:32.206126  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.209336  994624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:20:32.210840  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:20:32.209276  994624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.210862  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:20:32.210877  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:20:32.210890  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.210897  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.214370  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214385  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214769  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214813  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214841  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.215131  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215199  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215346  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215387  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215521  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215580  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215651  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.215748  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.244173  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0830 22:20:32.244664  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.245311  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.245343  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.245760  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.246361  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.246416  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.263737  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0830 22:20:32.264177  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.264737  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.264761  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.265106  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.265342  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.266996  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.267406  994624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.267430  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:20:32.267454  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.270345  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.270799  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.270829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.271021  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.271215  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.271403  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.271526  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.362089  994624 node_ready.go:35] waiting up to 6m0s for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:32.362281  994624 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0830 22:20:32.371216  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.372220  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:20:32.372240  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:20:32.396916  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:20:32.396942  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:20:32.417651  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.430668  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:32.430699  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:20:32.476147  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:33.655453  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.284190116s)
	I0830 22:20:33.655495  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.237806074s)
	I0830 22:20:33.655515  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655532  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655519  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655602  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655854  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.655875  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.655885  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655894  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656045  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656082  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656095  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656115  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656160  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656169  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656180  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656195  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656394  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656432  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656437  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656455  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656465  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656729  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656741  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656754  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.802947  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326756295s)
	I0830 22:20:33.802994  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803016  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803349  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803371  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803381  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803391  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803393  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803632  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803682  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803700  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803720  994624 addons.go:467] Verifying addon metrics-server=true in "no-preload-698195"
	I0830 22:20:33.805489  994624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:20:33.462414  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:35.961487  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:33.806934  994624 addons.go:502] enable addons completed in 1.650789204s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:20:34.550814  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:36.551274  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:38.551355  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:37.963175  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:40.462510  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:39.550464  994624 node_ready.go:49] node "no-preload-698195" has status "Ready":"True"
	I0830 22:20:39.550505  994624 node_ready.go:38] duration metric: took 7.188369926s waiting for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:39.550516  994624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:39.556533  994624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562470  994624 pod_ready.go:92] pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.562498  994624 pod_ready.go:81] duration metric: took 5.934964ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562511  994624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568348  994624 pod_ready.go:92] pod "etcd-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.568371  994624 pod_ready.go:81] duration metric: took 5.853085ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568380  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:41.593857  994624 pod_ready.go:102] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:42.594544  994624 pod_ready.go:92] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.594572  994624 pod_ready.go:81] duration metric: took 3.026185311s waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.594586  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599820  994624 pod_ready.go:92] pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.599844  994624 pod_ready.go:81] duration metric: took 5.249213ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599856  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751073  994624 pod_ready.go:92] pod "kube-proxy-5fjvd" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.751096  994624 pod_ready.go:81] duration metric: took 151.233562ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751105  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150620  994624 pod_ready.go:92] pod "kube-scheduler-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:43.150646  994624 pod_ready.go:81] duration metric: took 399.535323ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150656  994624 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
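
The node_ready and pod_ready lines from run 994624 above are minikube polling the API server: first until the Node reports a Ready condition, then once per system-critical Pod until its Ready condition is True (each "Ready":"False" line is a retry; a ":True" line ends that pod's wait). A minimal client-go sketch of the per-pod check is below; it is a standalone illustration rather than minikube's actual pod_ready.go, and the kubeconfig path, poll interval, and pod name are assumptions taken from a typical setup and from the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the Pod's Ready condition is True,
    // mirroring the "Ready":"True"/"False" statuses in the log above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodReady polls until the named pod is Ready or the timeout expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second) // poll interval is an assumption; minikube's may differ
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name copied from the log above; 6m0s matches the logged wait budget.
        if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-hlwf8", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }

The same pattern, applied to Node conditions (corev1.NodeReady) instead of Pod conditions, is what produces the node_ready.go lines earlier in this run.
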
	I0830 22:20:42.464235  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:44.960831  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:46.961923  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.458489  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:47.958322  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.165236  995603 kubeadm.go:787] kubelet initialised
	I0830 22:20:45.165261  995603 kubeadm.go:788] duration metric: took 48.999634631s waiting for restarted kubelet to initialise ...
	I0830 22:20:45.165269  995603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:45.170939  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176235  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.176259  995603 pod_ready.go:81] duration metric: took 5.296469ms waiting for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176271  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180703  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.180718  995603 pod_ready.go:81] duration metric: took 4.44114ms waiting for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180725  995603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185225  995603 pod_ready.go:92] pod "etcd-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.185244  995603 pod_ready.go:81] duration metric: took 4.512736ms waiting for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185255  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190403  995603 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.190425  995603 pod_ready.go:81] duration metric: took 5.162774ms waiting for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190436  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564427  995603 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.564460  995603 pod_ready.go:81] duration metric: took 374.00421ms waiting for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564473  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964836  995603 pod_ready.go:92] pod "kube-proxy-qg82w" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.964857  995603 pod_ready.go:81] duration metric: took 400.377393ms waiting for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964866  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364023  995603 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:46.364046  995603 pod_ready.go:81] duration metric: took 399.172301ms waiting for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364060  995603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:48.672124  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:48.962198  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.461425  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:49.958485  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.959424  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.170855  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.172690  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.962708  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.461729  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:54.458026  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.458124  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.459811  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:55.669393  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:57.670454  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:59.670654  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.463098  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.962495  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.960274  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.457998  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:02.170872  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:04.670725  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.460674  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.461496  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.459727  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.959179  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:06.671066  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.169869  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.463765  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.961943  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.959351  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.458921  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:11.171435  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:13.171597  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.461881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.961416  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.459572  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:16.960064  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:15.670176  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:18.170049  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:17.460985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.462323  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.963325  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.459085  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.460169  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:20.671600  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.169931  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:24.464683  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.962740  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.958014  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.458502  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.458654  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:25.670985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.171321  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:29.461798  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:31.961714  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.464431  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.958557  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.669588  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.670695  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.671313  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.463531  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:36.960658  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.960256  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.460047  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.168958  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.170995  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:38.961145  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:40.961870  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.958213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.958373  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.670302  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.171198  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:43.461666  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:45.461738  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.459123  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.459226  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.459428  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.670708  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.671826  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:47.462306  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:49.462771  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.962010  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:50.958149  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:52.958493  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.169610  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:53.170386  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.461116  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:56.959735  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.959069  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.458784  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:55.172123  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.670323  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.671985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:58.961225  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:00.961822  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.959058  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:01.959700  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.170880  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:04.171473  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.961938  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:05.461758  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:03.960213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.458196  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:08.458500  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.671998  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:09.169979  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:07.962031  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.460716  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.960753  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.459638  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:11.669885  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.670821  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:12.461433  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:14.463156  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:16.961558  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.459765  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:17.959192  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.671350  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:18.170569  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.462375  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:21.961785  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.959308  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.457592  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:20.173424  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.671008  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:23.961985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.962149  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:24.458343  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:26.958471  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.169264  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.181579  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.670923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.964954  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:30.461530  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.458262  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:31.463334  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.171662  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.670239  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.961287  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.961787  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:33.957827  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:35.958367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.960259  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:36.671642  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.169834  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.462107  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.961576  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.961773  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:40.458367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:42.458710  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.671303  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.170994  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:43.964448  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.461777  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.958652  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.960005  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.171108  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.670866  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.462315  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:50.462456  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:49.459011  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.958137  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.170020  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.171135  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:52.462694  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:54.962055  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.958728  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.959555  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.671421  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:58.169881  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.461322  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:59.461865  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:01.963541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.458148  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.458834  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.170265  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.170719  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.670111  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:03.967458  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:05.972793  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.958722  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:07.458954  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:06.670434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.671269  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.461195  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:10.961859  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:09.458999  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.958146  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.170482  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.670156  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.462648  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.463851  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.958659  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.962293  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.458707  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.670647  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.170462  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:17.960881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:19.962032  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.959370  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.459653  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.670329  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.169817  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:22.461024  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:24.461537  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:26.960897  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.958696  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.459488  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.671024  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.170228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:29.461009  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:31.461891  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.958318  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.958723  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.170683  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.670966  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:33.462005  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.960841  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:34.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.458068  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.170093  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.671411  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.961501  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.460893  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:39.458824  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:41.461623  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.170169  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.670892  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.461840  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:43.154742  995192 pod_ready.go:81] duration metric: took 4m0.000931927s waiting for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	E0830 22:23:43.154776  995192 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:23:43.154798  995192 pod_ready.go:38] duration metric: took 4m7.830262728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:23:43.154853  995192 kubeadm.go:640] restartCluster took 4m30.336637887s
	W0830 22:23:43.154961  995192 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:23:43.155001  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
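
At this point run 995192 has hit the 4m0s wait deadline on metrics-server-57f55c9bc5-p8pp2, abandons the in-place restart, and falls back to wiping the control plane with kubeadm reset before re-initialising. The sketch below shows that reset step as a local command purely for illustration; minikube actually runs it over SSH inside the guest VM, and the sudo env PATH wrapper from the logged command is omitted here.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // resetCluster mirrors the command logged at 22:23:43: wipe existing
    // control-plane state so kubeadm init can start from scratch.
    func resetCluster(criSocket string) error {
        cmd := exec.Command("sudo", "kubeadm", "reset",
            "--cri-socket", criSocket, "--force")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubeadm reset failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        // Socket path copied from the logged command (CRI-O runtime).
        if err := resetCluster("/var/run/crio/crio.sock"); err != nil {
            fmt.Println(err)
        }
    }
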
	I0830 22:23:43.959940  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:46.458406  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:45.170898  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:47.670457  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:48.957451  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:51.457818  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:50.171371  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:52.171468  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:54.670175  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:53.958105  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:56.458176  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:57.169990  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:59.177173  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:58.957583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:00.958404  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:02.958866  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:01.670484  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:03.671368  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.457466  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:07.457893  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.671480  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:08.170128  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:09.458376  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:11.958335  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:10.171221  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:12.171398  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.171694  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.432406  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.277378744s)
	I0830 22:24:14.432498  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:14.446038  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:24:14.455354  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:24:14.464292  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
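
The failed ls -la above is the stale-config check: if any of the four kubeconfig files still existed, minikube would clean them up before re-initialising; since kubeadm reset already removed them all, cleanup is skipped and the run proceeds straight to kubeadm init on the next line. A local sketch of the same existence test, assuming direct filesystem access instead of the SSH runner used in the log:

    package main

    import (
        "fmt"
        "os"
    )

    // hasStaleKubeadmConfig reports whether any of the kubeconfig files that
    // `kubeadm init` would regenerate are still present. In the log above,
    // all four are missing after the reset, so stale-config cleanup is skipped.
    func hasStaleKubeadmConfig() bool {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if _, err := os.Stat(f); err == nil {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println("stale config present:", hasStaleKubeadmConfig())
    }
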
	I0830 22:24:14.464332  995192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 22:24:14.680764  995192 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:24:13.965662  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.460984  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.171891  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.671072  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.958205  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.959096  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:23.459244  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.671733  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:22.671947  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.677772  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.927380  995192 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:24:24.927462  995192 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:24:24.927559  995192 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:24:24.927697  995192 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:24:24.927843  995192 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:24:24.927938  995192 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:24:24.929775  995192 out.go:204]   - Generating certificates and keys ...
	I0830 22:24:24.929895  995192 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:24:24.930004  995192 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:24:24.930118  995192 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:24:24.930202  995192 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:24:24.930321  995192 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:24:24.930408  995192 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:24:24.930485  995192 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:24:24.930559  995192 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:24:24.930658  995192 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:24:24.930756  995192 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:24:24.930821  995192 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:24:24.930922  995192 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:24:24.931009  995192 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:24:24.931077  995192 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:24:24.931170  995192 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:24:24.931245  995192 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:24:24.931354  995192 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:24:24.931430  995192 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:24:24.934341  995192 out.go:204]   - Booting up control plane ...
	I0830 22:24:24.934422  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:24:24.934524  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:24:24.934580  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:24:24.934689  995192 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:24:24.934770  995192 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:24:24.934809  995192 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:24:24.934936  995192 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:24:24.935014  995192 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003378 seconds
	I0830 22:24:24.935150  995192 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:24:24.935261  995192 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:24:24.935317  995192 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:24:24.935490  995192 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-791007 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:24:24.935540  995192 kubeadm.go:322] [bootstrap-token] Using token: 3t39h1.cgypp2756rpdn3ql
	I0830 22:24:24.937035  995192 out.go:204]   - Configuring RBAC rules ...
	I0830 22:24:24.937140  995192 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:24:24.937246  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:24:24.937428  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:24:24.937619  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:24:24.937762  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:24:24.937883  995192 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:24:24.938044  995192 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:24:24.938105  995192 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:24:24.938178  995192 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:24:24.938197  995192 kubeadm.go:322] 
	I0830 22:24:24.938277  995192 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:24:24.938290  995192 kubeadm.go:322] 
	I0830 22:24:24.938389  995192 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:24:24.938398  995192 kubeadm.go:322] 
	I0830 22:24:24.938429  995192 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:24:24.938506  995192 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:24:24.938577  995192 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:24:24.938586  995192 kubeadm.go:322] 
	I0830 22:24:24.938658  995192 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:24:24.938681  995192 kubeadm.go:322] 
	I0830 22:24:24.938745  995192 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:24:24.938754  995192 kubeadm.go:322] 
	I0830 22:24:24.938825  995192 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:24:24.938930  995192 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:24:24.939065  995192 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:24:24.939076  995192 kubeadm.go:322] 
	I0830 22:24:24.939160  995192 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:24:24.939266  995192 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:24:24.939280  995192 kubeadm.go:322] 
	I0830 22:24:24.939367  995192 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939452  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:24:24.939473  995192 kubeadm.go:322] 	--control-plane 
	I0830 22:24:24.939479  995192 kubeadm.go:322] 
	I0830 22:24:24.939597  995192 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:24:24.939606  995192 kubeadm.go:322] 
	I0830 22:24:24.939685  995192 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939848  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:24:24.939880  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:24:24.939916  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:24:24.942544  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:24:24.943961  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:24:24.990449  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
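
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced by "Configuring bridge CNI" above. Its exact contents are not included in the log; the snippet below writes a representative bridge + host-local conflist purely to illustrate the file format, and every field value in it is an assumption rather than minikube's real template.

    package main

    import "os"

    // A representative bridge CNI conflist. minikube's actual 1-k8s.conflist
    // may differ in fields and values; this only illustrates the format.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        // Requires root; minikube performs the equivalent copy over SSH.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
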
	I0830 22:24:25.040966  995192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:24:25.041042  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.041041  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=default-k8s-diff-port-791007 minikube.k8s.io/updated_at=2023_08_30T22_24_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.441321  995192 ops.go:34] apiserver oom_adj: -16
	I0830 22:24:25.441492  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.557357  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.163303  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.663721  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.459794  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.957287  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.171894  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:29.671326  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.163474  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:27.664036  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.163187  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.663338  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.163719  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.663846  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.163288  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.663346  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.163165  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.663996  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.958583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.960227  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.671923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:34.171143  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:32.163631  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:32.663347  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.163634  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.663228  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.163600  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.663994  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.163597  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.663419  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.163764  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.663168  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.163646  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.663613  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.163643  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.264223  995192 kubeadm.go:1081] duration metric: took 13.22324453s to wait for elevateKubeSystemPrivileges.
	I0830 22:24:38.264262  995192 kubeadm.go:406] StartCluster complete in 5m25.484553135s
	I0830 22:24:38.264286  995192 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.264411  995192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:24:38.266553  995192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.266800  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:24:38.266990  995192 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:24:38.267105  995192 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267117  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:24:38.267126  995192 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267141  995192 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:24:38.267163  995192 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267184  995192 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267209  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267214  995192 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267234  995192 addons.go:240] addon metrics-server should already be in state true
	I0830 22:24:38.267207  995192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-791007"
	I0830 22:24:38.267330  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267664  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267735  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267806  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267797  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267851  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267866  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.285812  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0830 22:24:38.286287  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287008  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.287036  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.287384  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0830 22:24:38.287488  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0830 22:24:38.287526  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.287808  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287949  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.288154  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.288200  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.288370  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288516  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288582  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288562  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288947  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289135  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289343  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.289569  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.289610  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.299364  995192 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.299392  995192 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:24:38.299422  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.299824  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.299861  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.305325  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0830 22:24:38.305834  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.306214  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0830 22:24:38.306525  995192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-791007" context rescaled to 1 replicas
	I0830 22:24:38.306561  995192 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:24:38.308424  995192 out.go:177] * Verifying Kubernetes components...
	I0830 22:24:38.306646  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.306688  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.309840  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:38.309911  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310245  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310362  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.310381  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310433  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.310801  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.312319  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.314072  995192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:24:38.313018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.315723  995192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.315742  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:24:38.315759  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.317188  995192 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:24:34.457685  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.458268  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.459052  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.171434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.173228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.318441  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:24:38.318465  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:24:38.318488  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.319537  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.320365  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320640  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.321238  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.321431  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.321733  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.322284  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322691  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.322778  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322887  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.323058  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.323205  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.323265  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.328412  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0830 22:24:38.328853  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.329468  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.329479  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.329898  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.330379  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.330395  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.345318  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0830 22:24:38.345781  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.346309  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.346329  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.346665  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.346886  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.348620  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.348922  995192 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.348941  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:24:38.348961  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.351758  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352206  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.352233  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352357  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.352562  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.352787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.352918  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.474078  995192 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.474205  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:24:38.479269  995192 node_ready.go:49] node "default-k8s-diff-port-791007" has status "Ready":"True"
	I0830 22:24:38.479294  995192 node_ready.go:38] duration metric: took 5.181356ms waiting for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.479305  995192 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:38.486715  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:38.508419  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:24:38.508443  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:24:38.515075  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.532789  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.549460  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:24:38.549488  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:24:38.593580  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:38.593614  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:24:38.637965  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:40.093211  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.618968297s)
	I0830 22:24:40.093259  995192 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0830 22:24:40.526723  995192 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526748  995192 pod_ready.go:81] duration metric: took 2.040009497s waiting for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:40.526757  995192 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526765  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:40.552258  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.037149365s)
	I0830 22:24:40.552312  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019488451s)
	I0830 22:24:40.552317  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552381  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552351  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552696  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552714  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552724  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552734  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552891  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552902  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552918  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552927  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553114  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553132  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553170  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553202  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553210  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553219  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.553225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553478  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553493  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.776628  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.138598233s)
	I0830 22:24:40.776714  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.776731  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777199  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777224  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777246  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777256  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.777270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777546  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777626  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777647  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777667  995192 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-791007"
	I0830 22:24:40.779719  995192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:24:40.781134  995192 addons.go:502] enable addons completed in 2.51415908s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:24:40.459185  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:42.958731  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.150847  994624 pod_ready.go:81] duration metric: took 4m0.000170406s waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:43.150881  994624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:43.150893  994624 pod_ready.go:38] duration metric: took 4m3.600363648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.150919  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.150964  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:43.151043  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:43.199383  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:43.199412  994624 cri.go:89] found id: ""
	I0830 22:24:43.199420  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:43.199479  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.204289  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:43.204371  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:43.247303  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.247329  994624 cri.go:89] found id: ""
	I0830 22:24:43.247340  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:43.247396  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.252955  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:43.253024  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:43.286292  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.286318  994624 cri.go:89] found id: ""
	I0830 22:24:43.286327  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:43.286386  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.290585  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:43.290653  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:43.323616  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:43.323645  994624 cri.go:89] found id: ""
	I0830 22:24:43.323655  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:43.323729  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.328256  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:43.328326  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:43.363566  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:43.363595  994624 cri.go:89] found id: ""
	I0830 22:24:43.363605  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:43.363666  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.368006  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:43.368067  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:43.403728  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.403752  994624 cri.go:89] found id: ""
	I0830 22:24:43.403761  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:43.403833  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.407957  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:43.408020  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:43.438864  994624 cri.go:89] found id: ""
	I0830 22:24:43.438893  994624 logs.go:284] 0 containers: []
	W0830 22:24:43.438903  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:43.438911  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:43.438976  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:43.478905  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.478935  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:43.478942  994624 cri.go:89] found id: ""
	I0830 22:24:43.478951  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:43.479015  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.486919  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.496040  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:43.496070  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:43.669727  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:43.669764  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.712471  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:43.712508  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.746949  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:43.746988  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:42.573674  995192 pod_ready.go:92] pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.573706  995192 pod_ready.go:81] duration metric: took 2.046935361s waiting for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.573716  995192 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579433  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.579450  995192 pod_ready.go:81] duration metric: took 5.72841ms waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579458  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584499  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.584519  995192 pod_ready.go:81] duration metric: took 5.055504ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584527  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678045  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.678071  995192 pod_ready.go:81] duration metric: took 93.537153ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678084  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082548  995192 pod_ready.go:92] pod "kube-proxy-bbdvk" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.082576  995192 pod_ready.go:81] duration metric: took 404.485397ms waiting for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082585  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479813  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.479840  995192 pod_ready.go:81] duration metric: took 397.248046ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479851  995192 pod_ready.go:38] duration metric: took 5.000533366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.479872  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.479956  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:43.498558  995192 api_server.go:72] duration metric: took 5.191959207s to wait for apiserver process to appear ...
	I0830 22:24:43.498583  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:43.498603  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:24:43.504260  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:24:43.505566  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:43.505589  995192 api_server.go:131] duration metric: took 6.997863ms to wait for apiserver health ...
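	(The healthz probe above can be reproduced by hand against the same endpoint. A minimal check, assuming anonymous access to the health endpoints is enabled — the Kubernetes default — and skipping TLS verification for the cluster's self-signed certificate:

	  curl -sk https://192.168.61.104:8444/healthz
	  # expected output: ok
	)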
	I0830 22:24:43.505598  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:43.682798  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:43.682837  995192 system_pods.go:61] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:43.682846  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:43.682856  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:43.682863  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:43.682870  995192 system_pods.go:61] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:43.682876  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:43.682887  995192 system_pods.go:61] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:43.682897  995192 system_pods.go:61] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:43.682909  995192 system_pods.go:74] duration metric: took 177.304345ms to wait for pod list to return data ...
	I0830 22:24:43.682919  995192 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:43.878616  995192 default_sa.go:45] found service account: "default"
	I0830 22:24:43.878643  995192 default_sa.go:55] duration metric: took 195.70884ms for default service account to be created ...
	I0830 22:24:43.878654  995192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:44.083123  995192 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:44.083155  995192 system_pods.go:89] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:44.083161  995192 system_pods.go:89] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:44.083165  995192 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:44.083170  995192 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:44.083177  995192 system_pods.go:89] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:44.083181  995192 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:44.083187  995192 system_pods.go:89] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:44.083194  995192 system_pods.go:89] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:44.083203  995192 system_pods.go:126] duration metric: took 204.542978ms to wait for k8s-apps to be running ...
	I0830 22:24:44.083216  995192 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:44.083297  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:44.098110  995192 system_svc.go:56] duration metric: took 14.88196ms WaitForService to wait for kubelet.
	I0830 22:24:44.098143  995192 kubeadm.go:581] duration metric: took 5.7915497s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:44.098211  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:44.278751  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:44.278802  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:44.278814  995192 node_conditions.go:105] duration metric: took 180.597923ms to run NodePressure ...
	I0830 22:24:44.278825  995192 start.go:228] waiting for startup goroutines ...
	I0830 22:24:44.278831  995192 start.go:233] waiting for cluster config update ...
	I0830 22:24:44.278841  995192 start.go:242] writing updated cluster config ...
	I0830 22:24:44.279208  995192 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:44.332074  995192 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:44.334502  995192 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-791007" cluster and "default" namespace by default
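	(With the kubeconfig updated as above, the cluster can be inspected directly with stock kubectl; the context name below is taken from the profile name in the log, following minikube's usual convention of naming the context after the profile:

	  kubectl config use-context default-k8s-diff-port-791007
	  kubectl get nodes -o wide
	  kubectl -n kube-system get pods
	)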
	I0830 22:24:40.672327  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.171136  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.780116  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:43.780147  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.824462  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:43.824494  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:43.875847  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:43.875881  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:43.937533  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:43.937582  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:43.950917  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:43.950948  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.989236  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:43.989265  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:44.025171  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:44.025218  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:44.644566  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:44.644609  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:44.692321  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:44.692356  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.229304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:47.252442  994624 api_server.go:72] duration metric: took 4m15.086891336s to wait for apiserver process to appear ...
	I0830 22:24:47.252476  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:47.252521  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:47.252593  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:47.286367  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.286397  994624 cri.go:89] found id: ""
	I0830 22:24:47.286410  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:47.286461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.290812  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:47.290883  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:47.324349  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.324376  994624 cri.go:89] found id: ""
	I0830 22:24:47.324386  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:47.324440  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.329002  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:47.329072  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:47.362954  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:47.362985  994624 cri.go:89] found id: ""
	I0830 22:24:47.362996  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:47.363062  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.367498  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:47.367587  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:47.398450  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.398478  994624 cri.go:89] found id: ""
	I0830 22:24:47.398489  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:47.398550  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.402646  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:47.402741  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:47.438663  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:47.438691  994624 cri.go:89] found id: ""
	I0830 22:24:47.438701  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:47.438769  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.443046  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:47.443114  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:47.472698  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.472725  994624 cri.go:89] found id: ""
	I0830 22:24:47.472733  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:47.472792  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.477075  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:47.477150  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:47.507099  994624 cri.go:89] found id: ""
	I0830 22:24:47.507138  994624 logs.go:284] 0 containers: []
	W0830 22:24:47.507148  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:47.507157  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:47.507232  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:47.540635  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:47.540661  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.540667  994624 cri.go:89] found id: ""
	I0830 22:24:47.540676  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:47.540734  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.545274  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.549659  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:47.549681  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:47.605419  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:47.605460  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.646819  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:47.646856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.684772  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:47.684801  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.731741  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:47.731791  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.762713  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:47.762745  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:48.266510  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:48.266557  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:48.315124  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:48.315164  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:48.332407  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:48.332447  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:48.463670  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:48.463710  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:48.498034  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:48.498067  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:48.528326  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:48.528372  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:48.563858  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:48.563893  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:45.670559  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:46.364206  995603 pod_ready.go:81] duration metric: took 4m0.000126235s waiting for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:46.364246  995603 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:46.364267  995603 pod_ready.go:38] duration metric: took 4m1.19899008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:46.364298  995603 kubeadm.go:640] restartCluster took 5m11.375966766s
	W0830 22:24:46.364364  995603 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:24:46.364394  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0830 22:24:51.095064  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:24:51.106674  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:24:51.108320  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:51.108339  994624 api_server.go:131] duration metric: took 3.855856321s to wait for apiserver health ...
	I0830 22:24:51.108347  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:51.108375  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:51.108422  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:51.140030  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:51.140059  994624 cri.go:89] found id: ""
	I0830 22:24:51.140069  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:51.140133  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.144302  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:51.144375  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:51.181915  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:51.181944  994624 cri.go:89] found id: ""
	I0830 22:24:51.181953  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:51.182007  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.187104  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:51.187171  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:51.220763  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:51.220794  994624 cri.go:89] found id: ""
	I0830 22:24:51.220806  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:51.220890  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.225368  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:51.225443  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:51.263131  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:51.263155  994624 cri.go:89] found id: ""
	I0830 22:24:51.263164  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:51.263231  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.268531  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:51.268586  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:51.307119  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.307145  994624 cri.go:89] found id: ""
	I0830 22:24:51.307154  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:51.307224  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.311914  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:51.311988  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:51.341363  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:51.341391  994624 cri.go:89] found id: ""
	I0830 22:24:51.341402  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:51.341461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.345501  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:51.345570  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:51.378276  994624 cri.go:89] found id: ""
	I0830 22:24:51.378311  994624 logs.go:284] 0 containers: []
	W0830 22:24:51.378322  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:51.378329  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:51.378398  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:51.416207  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.416228  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:51.416232  994624 cri.go:89] found id: ""
	I0830 22:24:51.416245  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:51.416295  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.421034  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.424911  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:51.424938  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.458543  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:51.458576  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.489189  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:51.489223  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:52.074879  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:52.074924  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:52.091316  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:52.091357  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:52.131564  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:52.131602  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:52.168850  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:52.168879  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:52.200329  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:52.200358  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:52.230767  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:52.230799  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:52.276139  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:52.276177  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:52.330487  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:52.330523  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:52.469305  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:52.469336  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:52.536395  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:52.536432  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:55.089149  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:55.089184  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.089194  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.089198  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.089203  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.089207  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.089211  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.089217  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.089224  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.089230  994624 system_pods.go:74] duration metric: took 3.980877363s to wait for pod list to return data ...
	I0830 22:24:55.089237  994624 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:55.091833  994624 default_sa.go:45] found service account: "default"
	I0830 22:24:55.091862  994624 default_sa.go:55] duration metric: took 2.618667ms for default service account to be created ...
	I0830 22:24:55.091871  994624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:55.098108  994624 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:55.098145  994624 system_pods.go:89] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.098154  994624 system_pods.go:89] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.098163  994624 system_pods.go:89] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.098179  994624 system_pods.go:89] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.098190  994624 system_pods.go:89] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.098201  994624 system_pods.go:89] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.098212  994624 system_pods.go:89] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.098233  994624 system_pods.go:89] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.098241  994624 system_pods.go:126] duration metric: took 6.364144ms to wait for k8s-apps to be running ...
	I0830 22:24:55.098250  994624 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:55.098297  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:55.114382  994624 system_svc.go:56] duration metric: took 16.118629ms WaitForService to wait for kubelet.
	I0830 22:24:55.114413  994624 kubeadm.go:581] duration metric: took 4m22.94887118s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:55.114435  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:55.118227  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:55.118256  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:55.118272  994624 node_conditions.go:105] duration metric: took 3.832437ms to run NodePressure ...
	I0830 22:24:55.118287  994624 start.go:228] waiting for startup goroutines ...
	I0830 22:24:55.118295  994624 start.go:233] waiting for cluster config update ...
	I0830 22:24:55.118309  994624 start.go:242] writing updated cluster config ...
	I0830 22:24:55.118611  994624 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:55.169756  994624 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:55.172028  994624 out.go:177] * Done! kubectl is now configured to use "no-preload-698195" cluster and "default" namespace by default
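
	The block above ends the "no-preload-698195" start: the tooling polls the apiserver's /healthz endpoint, then verifies system pods, the default service account, running apps, the kubelet service, and node conditions. As a rough illustration of the healthz-polling step only (not minikube's actual api_server.go; the URL, timeout, and TLS handling here are assumptions for the sketch), a minimal Go poller could look like this:

	```go
	// Minimal sketch of polling an apiserver /healthz endpoint until it
	// returns 200, in the spirit of the "Checking apiserver healthz at
	// https://192.168.72.28:8443/healthz" lines above. Illustrative only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is self-signed in this setting, so the sketch
			// skips verification; a real client would load the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.28:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```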
	I0830 22:25:09.359961  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (22.995525599s)
	I0830 22:25:09.360040  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:09.375757  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:25:09.385118  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:25:09.394601  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:25:09.394640  995603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0830 22:25:09.454824  995603 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0830 22:25:09.455022  995603 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:25:09.599893  995603 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:25:09.600055  995603 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:25:09.600213  995603 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:25:09.783920  995603 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:25:09.784082  995603 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:25:09.793193  995603 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0830 22:25:09.902777  995603 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:25:09.904820  995603 out.go:204]   - Generating certificates and keys ...
	I0830 22:25:09.904937  995603 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:25:09.905035  995603 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:25:09.905150  995603 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:25:09.905241  995603 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:25:09.905350  995603 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:25:09.905423  995603 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:25:09.905540  995603 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:25:09.905622  995603 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:25:09.905799  995603 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:25:09.905918  995603 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:25:09.905978  995603 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:25:09.906052  995603 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:25:10.141265  995603 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:25:10.238428  995603 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:25:10.387118  995603 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:25:10.620307  995603 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:25:10.625802  995603 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:25:10.627926  995603 out.go:204]   - Booting up control plane ...
	I0830 22:25:10.629866  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:25:10.635839  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:25:10.638800  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:25:10.641079  995603 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:25:10.666312  995603 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:25:20.671894  995603 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004868 seconds
	I0830 22:25:20.672078  995603 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:25:20.687003  995603 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:25:21.215417  995603 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:25:21.215657  995603 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-250163 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0830 22:25:21.726398  995603 kubeadm.go:322] [bootstrap-token] Using token: y3ik1i.subqwfsto1ck6o9y
	I0830 22:25:21.728095  995603 out.go:204]   - Configuring RBAC rules ...
	I0830 22:25:21.728243  995603 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:25:21.735828  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:25:21.741247  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:25:21.744588  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:25:21.747966  995603 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:25:21.835002  995603 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:25:22.157106  995603 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:25:22.157129  995603 kubeadm.go:322] 
	I0830 22:25:22.157207  995603 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:25:22.157221  995603 kubeadm.go:322] 
	I0830 22:25:22.157343  995603 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:25:22.157373  995603 kubeadm.go:322] 
	I0830 22:25:22.157410  995603 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:25:22.157493  995603 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:25:22.157572  995603 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:25:22.157581  995603 kubeadm.go:322] 
	I0830 22:25:22.157661  995603 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:25:22.157779  995603 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:25:22.157877  995603 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:25:22.157894  995603 kubeadm.go:322] 
	I0830 22:25:22.158002  995603 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0830 22:25:22.158104  995603 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:25:22.158119  995603 kubeadm.go:322] 
	I0830 22:25:22.158250  995603 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158415  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:25:22.158457  995603 kubeadm.go:322]     --control-plane 	  
	I0830 22:25:22.158467  995603 kubeadm.go:322] 
	I0830 22:25:22.158555  995603 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:25:22.158566  995603 kubeadm.go:322] 
	I0830 22:25:22.158674  995603 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158820  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:25:22.159148  995603 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:25:22.159192  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:25:22.159205  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:25:22.160970  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:25:22.162353  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:25:22.173835  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:25:22.192193  995603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:25:22.192332  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=old-k8s-version-250163 minikube.k8s.io/updated_at=2023_08_30T22_25_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.192335  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.440832  995603 ops.go:34] apiserver oom_adj: -16
	I0830 22:25:22.441067  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.560349  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.171762  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.671955  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.171789  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.671863  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.172176  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.672262  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.172348  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.672680  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.171856  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.671722  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.171712  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.671959  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.171914  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.672320  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.171688  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.671958  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.172481  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.672528  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.172583  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.672562  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.171839  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.672125  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.172515  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.672643  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.172469  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.672444  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.171897  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.672260  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.171900  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.332591  995603 kubeadm.go:1081] duration metric: took 15.140354535s to wait for elevateKubeSystemPrivileges.
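
	The long run of repeated "kubectl get sa default" commands above is a fixed-interval wait for the default service account to exist, which signals that RBAC bootstrap privileges are in place. A minimal sketch of that polling pattern, assuming the command path and interval rather than copying minikube's own elevateKubeSystemPrivileges code:

	```go
	// Illustrative retry loop: keep running "kubectl get sa default" until it
	// succeeds or a deadline passes. Paths and timing are assumptions.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // service account exists; bootstrap is complete
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.16.0/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute)
		fmt.Println(err)
	}
	```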
	I0830 22:25:37.332635  995603 kubeadm.go:406] StartCluster complete in 6m2.391789918s
	I0830 22:25:37.332659  995603 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.332770  995603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:25:37.334722  995603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.334991  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:25:37.335087  995603 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:25:37.335217  995603 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335241  995603 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-250163"
	W0830 22:25:37.335253  995603 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:25:37.335313  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335317  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:25:37.335322  995603 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335342  995603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-250163"
	I0830 22:25:37.335345  995603 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335380  995603 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-250163"
	W0830 22:25:37.335391  995603 addons.go:240] addon metrics-server should already be in state true
	I0830 22:25:37.335440  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335753  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335847  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335810  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335939  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.355619  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I0830 22:25:37.355760  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43941
	I0830 22:25:37.355979  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0830 22:25:37.356166  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356203  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356595  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356729  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356748  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.356730  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356793  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357097  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.357114  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357170  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357177  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357383  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.357486  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357825  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.357857  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.358246  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.358292  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.373639  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0830 22:25:37.374107  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.374639  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.374657  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.375035  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.375359  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.377439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.379303  995603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:25:37.378176  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0830 22:25:37.380617  995603 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-250163"
	W0830 22:25:37.380661  995603 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:25:37.380706  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.380787  995603 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.380802  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:25:37.380826  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.381081  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.381123  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.381726  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.382284  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.382304  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.382656  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.382878  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.384791  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.387018  995603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:25:37.385098  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.385806  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.388841  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.388863  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.388865  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:25:37.388883  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:25:37.388907  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.389015  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.389121  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.389274  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.392059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392538  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.392557  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392720  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.392861  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.392989  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.393101  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.399504  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I0830 22:25:37.399592  995603 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-250163" context rescaled to 1 replicas
	I0830 22:25:37.399627  995603 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:25:37.401322  995603 out.go:177] * Verifying Kubernetes components...
	I0830 22:25:37.400205  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.402915  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:37.403460  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.403485  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.403872  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.404488  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.404537  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.420598  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0830 22:25:37.421352  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.422218  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.422240  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.422714  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.422979  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.424750  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.425396  995603 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.425415  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:25:37.425439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.428142  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428731  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.428762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428900  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.429077  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.429330  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.429469  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.705452  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.713345  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.736333  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:25:37.736356  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:25:37.825018  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:25:37.825051  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:25:37.858566  995603 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.858657  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:25:37.888050  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:37.888082  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:25:37.901662  995603 node_ready.go:49] node "old-k8s-version-250163" has status "Ready":"True"
	I0830 22:25:37.901689  995603 node_ready.go:38] duration metric: took 43.090996ms waiting for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.901701  995603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:37.928785  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:37.960479  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:39.232573  995603 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232603  995603 pod_ready.go:81] duration metric: took 1.303781463s waiting for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	E0830 22:25:39.232616  995603 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232630  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:39.305932  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.600438988s)
	I0830 22:25:39.306003  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306018  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306031  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592647384s)
	I0830 22:25:39.306084  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306106  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306088  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.447402831s)
	I0830 22:25:39.306222  995603 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0830 22:25:39.306459  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306481  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306485  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306512  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306518  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306534  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306517  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306608  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306628  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306638  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306862  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306903  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306911  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306946  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306972  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306981  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306993  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.307001  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.307338  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.307387  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.307407  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.425740  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.465201154s)
	I0830 22:25:39.425823  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.425844  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426221  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426260  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426272  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426289  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.426311  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426584  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426620  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426638  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426657  995603 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-250163"
	I0830 22:25:39.428628  995603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:25:39.430476  995603 addons.go:502] enable addons completed in 2.095405793s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:25:40.785067  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.785090  995603 pod_ready.go:81] duration metric: took 1.552452887s waiting for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.785100  995603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790132  995603 pod_ready.go:92] pod "kube-proxy-866k8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.790158  995603 pod_ready.go:81] duration metric: took 5.051684ms waiting for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790173  995603 pod_ready.go:38] duration metric: took 2.888452893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:40.790199  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:25:40.790247  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:25:40.805458  995603 api_server.go:72] duration metric: took 3.405792506s to wait for apiserver process to appear ...
	I0830 22:25:40.805488  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:25:40.805512  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:25:40.812389  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:25:40.813455  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:25:40.813483  995603 api_server.go:131] duration metric: took 7.983448ms to wait for apiserver health ...
	I0830 22:25:40.813520  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:25:40.818720  995603 system_pods.go:59] 4 kube-system pods found
	I0830 22:25:40.818741  995603 system_pods.go:61] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.818746  995603 system_pods.go:61] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.818754  995603 system_pods.go:61] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.818763  995603 system_pods.go:61] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.818768  995603 system_pods.go:74] duration metric: took 5.239623ms to wait for pod list to return data ...
	I0830 22:25:40.818776  995603 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:25:40.821982  995603 default_sa.go:45] found service account: "default"
	I0830 22:25:40.822001  995603 default_sa.go:55] duration metric: took 3.215755ms for default service account to be created ...
	I0830 22:25:40.822010  995603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:25:40.824823  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:40.824844  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.824850  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.824860  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.824871  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.824896  995603 retry.go:31] will retry after 244.703972ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.075793  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.075829  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.075838  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.075849  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.075860  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.075886  995603 retry.go:31] will retry after 325.650304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.407202  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.407234  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.407242  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.407252  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.407262  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.407313  995603 retry.go:31] will retry after 449.708915ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.862007  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.862038  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.862043  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.862061  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.862070  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.862086  995603 retry.go:31] will retry after 484.451835ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:42.351597  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:42.351637  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:42.351646  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:42.351656  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:42.351664  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:42.351680  995603 retry.go:31] will retry after 739.711019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.096340  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.096365  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.096371  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.096380  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.096387  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.096402  995603 retry.go:31] will retry after 871.763135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.974914  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.974947  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.974954  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.974964  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.974973  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.974994  995603 retry.go:31] will retry after 1.11275286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:45.093268  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:45.093293  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:45.093299  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:45.093306  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:45.093313  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:45.093329  995603 retry.go:31] will retry after 1.015840649s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:46.114920  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:46.114954  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:46.114961  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:46.114972  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:46.114982  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:46.115002  995603 retry.go:31] will retry after 1.822388925s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:47.942838  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:47.942870  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:47.942877  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:47.942887  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:47.942900  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:47.942920  995603 retry.go:31] will retry after 1.516432463s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:49.464430  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:49.464460  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:49.464465  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:49.464473  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:49.464480  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:49.464496  995603 retry.go:31] will retry after 2.558675876s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:52.028440  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:52.028469  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:52.028474  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:52.028481  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:52.028488  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:52.028503  995603 retry.go:31] will retry after 2.801664105s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:54.835174  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:54.835200  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:54.835205  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:54.835212  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:54.835219  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:54.835243  995603 retry.go:31] will retry after 3.386411543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:58.228062  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:58.228104  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:58.228113  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:58.228123  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:58.228136  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:58.228158  995603 retry.go:31] will retry after 5.58749509s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:03.822486  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:03.822511  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:03.822516  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:03.822523  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:03.822530  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:03.822548  995603 retry.go:31] will retry after 6.26222599s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:10.092537  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:10.092563  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:10.092569  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:10.092576  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:10.092582  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:10.092599  995603 retry.go:31] will retry after 6.680813015s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:16.780093  995603 system_pods.go:86] 5 kube-system pods found
	I0830 22:26:16.780120  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:16.780125  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Pending
	I0830 22:26:16.780130  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:16.780138  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:16.780145  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:16.780161  995603 retry.go:31] will retry after 9.963152707s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:26.749177  995603 system_pods.go:86] 7 kube-system pods found
	I0830 22:26:26.749205  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:26.749211  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:26.749215  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:26.749219  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:26.749223  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Pending
	I0830 22:26:26.749230  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:26.749237  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:26.749252  995603 retry.go:31] will retry after 8.744971537s: missing components: etcd, kube-scheduler
	I0830 22:26:35.500731  995603 system_pods.go:86] 8 kube-system pods found
	I0830 22:26:35.500759  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:35.500765  995603 system_pods.go:89] "etcd-old-k8s-version-250163" [260642d3-280e-4ae1-97bc-d15a904b3205] Running
	I0830 22:26:35.500769  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:35.500775  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:35.500779  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:35.500783  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Running
	I0830 22:26:35.500789  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:35.500796  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:35.500813  995603 system_pods.go:126] duration metric: took 54.67879848s to wait for k8s-apps to be running ...
	I0830 22:26:35.500827  995603 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:26:35.500876  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:26:35.519861  995603 system_svc.go:56] duration metric: took 19.021631ms WaitForService to wait for kubelet.
	I0830 22:26:35.519900  995603 kubeadm.go:581] duration metric: took 58.120243521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:26:35.519985  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:26:35.524455  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:26:35.524486  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:26:35.524537  995603 node_conditions.go:105] duration metric: took 4.543152ms to run NodePressure ...
	I0830 22:26:35.524550  995603 start.go:228] waiting for startup goroutines ...
	I0830 22:26:35.524562  995603 start.go:233] waiting for cluster config update ...
	I0830 22:26:35.524573  995603 start.go:242] writing updated cluster config ...
	I0830 22:26:35.524938  995603 ssh_runner.go:195] Run: rm -f paused
	I0830 22:26:35.578723  995603 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0830 22:26:35.580954  995603 out.go:177] 
	W0830 22:26:35.582332  995603 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0830 22:26:35.583700  995603 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0830 22:26:35.585290  995603 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-250163" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:19:37 UTC, ends at Wed 2023-08-30 22:33:56 UTC. --
	Aug 30 22:33:55 no-preload-698195 crio[727]: time="2023-08-30 22:33:55.840350348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96fe5130-eb26-43d3-893d-cb2aff301abb name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.559717145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f0781542-4c70-47d6-83bf-670d67ede8aa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.559793616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f0781542-4c70-47d6-83bf-670d67ede8aa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.560141201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f0781542-4c70-47d6-83bf-670d67ede8aa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.596254095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e16fb830-5a92-47c6-b50c-338d6552c680 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.596370419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e16fb830-5a92-47c6-b50c-338d6552c680 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.596625283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e16fb830-5a92-47c6-b50c-338d6552c680 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.631299156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7dce6a68-e322-440e-ba4f-9b55fce18127 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.631356963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7dce6a68-e322-440e-ba4f-9b55fce18127 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.631621737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7dce6a68-e322-440e-ba4f-9b55fce18127 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.669160591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cec3bb8e-a547-4b44-a50a-acfd4a4d6211 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.669242425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cec3bb8e-a547-4b44-a50a-acfd4a4d6211 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.669586282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cec3bb8e-a547-4b44-a50a-acfd4a4d6211 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.704424274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c98a5bf-9414-4d2a-bf62-dae7b1e09e40 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.704485965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c98a5bf-9414-4d2a-bf62-dae7b1e09e40 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.704707735Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c98a5bf-9414-4d2a-bf62-dae7b1e09e40 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.738051528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28607473-fd98-4baa-babe-132829b64f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.738136154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=28607473-fd98-4baa-babe-132829b64f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.738336191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=28607473-fd98-4baa-babe-132829b64f97 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.771608305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bec63917-22e3-4365-a5b2-7242ae67b1ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.771702336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bec63917-22e3-4365-a5b2-7242ae67b1ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.772002460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bec63917-22e3-4365-a5b2-7242ae67b1ab name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.804709733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=658293d0-4d5e-483a-bb42-cb6d93bda25b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.804810566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=658293d0-4d5e-483a-bb42-cb6d93bda25b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:33:56 no-preload-698195 crio[727]: time="2023-08-30 22:33:56.805089728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=658293d0-4d5e-483a-bb42-cb6d93bda25b name=/runtime.v1alpha2.RuntimeService/ListContainers
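The repeated ListContainers debug entries above are the CRI runtime being polled for the full container list (the requests carry no filters). A minimal way to reproduce the same query by hand, assuming the unix:///var/run/crio/crio.sock endpoint shown in the node annotations further below, is to ssh into the profile and use crictl:

    # enter the node for this profile
    minikube -p no-preload-698195 ssh

    # list all containers, running and exited, as in the responses above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

    # same listing with full fields, closer to the raw gRPC response
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json

This is a sketch of how to re-run the query, not part of the captured test output; the socket path and profile name are taken from the log itself.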
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	a4ec3add6f727       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   8be5ef26dacae
	b0bd91d1795dd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   20b13e9db98e1
	61c09841e92e9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   97f611c774cf7
	2fe23692aaba2       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      13 minutes ago      Running             kube-proxy                1                   67a8ad99cda12
	c00d7aca5019d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8be5ef26dacae
	94b2663b3d51d       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      13 minutes ago      Running             kube-scheduler            1                   578a57c0880be
	c6594d2e258e6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   e0d15cc034086
	5f90117987e5b       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      13 minutes ago      Running             kube-controller-manager   1                   2d110240fe0d0
	2aff15ad720bf       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      13 minutes ago      Running             kube-apiserver            1                   a1ae5ec95669a
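To drill into any single entry in the container status table above, the truncated IDs in the first column can be passed to crictl as prefixes (a hedged example from inside the node, assuming the same CRI-O socket as above):

    sudo crictl inspect a4ec3add6f727   # full status and annotations for the running storage-provisioner
    sudo crictl logs c00d7aca5019d      # logs of the exited storage-provisioner attempt
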
	
	* 
	* ==> coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41758 - 13562 "HINFO IN 1972653659024392533.621617805422138747. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011709872s
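The CoreDNS output above was captured through the container runtime; the same log should also be retrievable through the API server, using the pod name that appears in the container list (context name assumed to match the profile, as elsewhere in this report):

    kubectl --context no-preload-698195 -n kube-system logs coredns-5dd5756b68-hlwf8
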
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-698195
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-698195
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=no-preload-698195
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_10_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:10:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-698195
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:33:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:31:10 +0000   Wed, 30 Aug 2023 22:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:31:10 +0000   Wed, 30 Aug 2023 22:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:31:10 +0000   Wed, 30 Aug 2023 22:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:31:10 +0000   Wed, 30 Aug 2023 22:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.28
	  Hostname:    no-preload-698195
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 182cdf5ac5c54a6098509e831cd9b243
	  System UUID:                182cdf5a-c5c5-4a60-9850-9e831cd9b243
	  Boot ID:                    8c07ffbf-69f6-418f-9bc0-2a9d95262b85
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-hlwf8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-698195                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-698195             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-698195    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-5fjvd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-698195             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-nfbkd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-698195 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-698195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-698195 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node no-preload-698195 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-698195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-698195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-698195 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-698195 event: Registered Node no-preload-698195 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-698195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-698195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-698195 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-698195 event: Registered Node no-preload-698195 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug30 22:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074697] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.774743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472144] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154231] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.565919] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.313902] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.097291] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.142301] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.124993] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.276448] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[Aug30 22:20] systemd-fstab-generator[1233]: Ignoring "noauto" for root device
	[ +15.057739] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] <==
	* {"level":"info","ts":"2023-08-30T22:20:26.125084Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2d3b68a7afbccf5b","local-member-id":"dd3f57cb1d137e03","added-peer-id":"dd3f57cb1d137e03","added-peer-peer-urls":["https://192.168.72.28:2380"]}
	{"level":"info","ts":"2023-08-30T22:20:26.125288Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2d3b68a7afbccf5b","local-member-id":"dd3f57cb1d137e03","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:20:26.125322Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:20:26.14761Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-30T22:20:26.14781Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dd3f57cb1d137e03","initial-advertise-peer-urls":["https://192.168.72.28:2380"],"listen-peer-urls":["https://192.168.72.28:2380"],"advertise-client-urls":["https://192.168.72.28:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.28:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-30T22:20:26.1479Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-30T22:20:26.147954Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.28:2380"}
	{"level":"info","ts":"2023-08-30T22:20:26.14796Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.28:2380"}
	{"level":"info","ts":"2023-08-30T22:20:27.379084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-30T22:20:27.379134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-30T22:20:27.379173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 received MsgPreVoteResp from dd3f57cb1d137e03 at term 2"}
	{"level":"info","ts":"2023-08-30T22:20:27.37919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 became candidate at term 3"}
	{"level":"info","ts":"2023-08-30T22:20:27.379196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 received MsgVoteResp from dd3f57cb1d137e03 at term 3"}
	{"level":"info","ts":"2023-08-30T22:20:27.379204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 became leader at term 3"}
	{"level":"info","ts":"2023-08-30T22:20:27.379211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dd3f57cb1d137e03 elected leader dd3f57cb1d137e03 at term 3"}
	{"level":"info","ts":"2023-08-30T22:20:27.381Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dd3f57cb1d137e03","local-member-attributes":"{Name:no-preload-698195 ClientURLs:[https://192.168.72.28:2379]}","request-path":"/0/members/dd3f57cb1d137e03/attributes","cluster-id":"2d3b68a7afbccf5b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T22:20:27.381187Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:20:27.381145Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:20:27.382433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.28:2379"}
	{"level":"info","ts":"2023-08-30T22:20:27.382628Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T22:20:27.382667Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-30T22:20:27.38354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T22:30:27.420635Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":864}
	{"level":"info","ts":"2023-08-30T22:30:27.424001Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":864,"took":"2.768736ms","hash":3181232975}
	{"level":"info","ts":"2023-08-30T22:30:27.424164Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3181232975,"revision":864,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  22:33:57 up 14 min,  0 users,  load average: 0.03, 0.08, 0.08
	Linux no-preload-698195 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] <==
	* I0830 22:30:30.003776       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0830 22:30:30.004107       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:30:30.004255       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:30:30.004936       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:31:28.868765       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.79.55:443: connect: connection refused
	I0830 22:31:28.869120       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:31:30.004540       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:31:30.004680       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0830 22:31:30.004768       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0830 22:31:30.005810       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:31:30.005999       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:31:30.006028       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:32:28.868232       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.79.55:443: connect: connection refused
	I0830 22:32:28.868396       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0830 22:33:28.868487       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.79.55:443: connect: connection refused
	I0830 22:33:28.868731       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:33:30.005031       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:33:30.005183       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0830 22:33:30.005231       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0830 22:33:30.006268       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:33:30.006387       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:33:30.006400       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] <==
	* I0830 22:28:12.242136       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:28:41.741939       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:28:42.250595       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:29:11.748404       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:29:12.261052       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:29:41.754385       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:29:42.269303       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:30:11.760359       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:30:12.280398       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:30:41.767322       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:30:42.294621       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:31:11.776044       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:31:12.302556       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:31:41.781756       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:31:42.310971       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0830 22:31:50.903205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="272.714µs"
	I0830 22:32:03.901117       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="110.605µs"
	E0830 22:32:11.787771       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:32:12.318976       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:32:41.795064       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:32:42.327240       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:33:11.800071       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:33:12.335625       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:33:41.806072       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:33:42.345994       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] <==
	* I0830 22:20:31.246950       1 server_others.go:69] "Using iptables proxy"
	I0830 22:20:31.257680       1 node.go:141] Successfully retrieved node IP: 192.168.72.28
	I0830 22:20:31.297066       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 22:20:31.297118       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 22:20:31.300119       1 server_others.go:152] "Using iptables Proxier"
	I0830 22:20:31.300174       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 22:20:31.300460       1 server.go:846] "Version info" version="v1.28.1"
	I0830 22:20:31.300498       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:20:31.301366       1 config.go:188] "Starting service config controller"
	I0830 22:20:31.301407       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 22:20:31.301430       1 config.go:97] "Starting endpoint slice config controller"
	I0830 22:20:31.301433       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 22:20:31.301967       1 config.go:315] "Starting node config controller"
	I0830 22:20:31.302001       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 22:20:31.401958       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 22:20:31.402033       1 shared_informer.go:318] Caches are synced for node config
	I0830 22:20:31.402056       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] <==
	* I0830 22:20:26.411312       1 serving.go:348] Generated self-signed cert in-memory
	W0830 22:20:28.972992       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0830 22:20:28.973159       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 22:20:28.973197       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0830 22:20:28.973222       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 22:20:29.012813       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0830 22:20:29.012979       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:20:29.015367       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0830 22:20:29.022455       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0830 22:20:29.022508       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 22:20:29.022536       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0830 22:20:29.122631       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:19:37 UTC, ends at Wed 2023-08-30 22:33:57 UTC. --
	Aug 30 22:31:22 no-preload-698195 kubelet[1239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:31:22 no-preload-698195 kubelet[1239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:31:23 no-preload-698195 kubelet[1239]: E0830 22:31:23.880208    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:31:36 no-preload-698195 kubelet[1239]: E0830 22:31:36.892181    1239 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 30 22:31:36 no-preload-698195 kubelet[1239]: E0830 22:31:36.892226    1239 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 30 22:31:36 no-preload-698195 kubelet[1239]: E0830 22:31:36.892526    1239 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9jbm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-nfbkd_kube-system(450f12e3-6554-41c5-9d41-bee735b322b3): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:31:36 no-preload-698195 kubelet[1239]: E0830 22:31:36.892564    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:31:50 no-preload-698195 kubelet[1239]: E0830 22:31:50.883927    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:32:03 no-preload-698195 kubelet[1239]: E0830 22:32:03.879973    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:32:14 no-preload-698195 kubelet[1239]: E0830 22:32:14.879384    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:32:22 no-preload-698195 kubelet[1239]: E0830 22:32:22.907249    1239 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:32:22 no-preload-698195 kubelet[1239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:32:22 no-preload-698195 kubelet[1239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:32:22 no-preload-698195 kubelet[1239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:32:26 no-preload-698195 kubelet[1239]: E0830 22:32:26.880956    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:32:41 no-preload-698195 kubelet[1239]: E0830 22:32:41.880108    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:32:56 no-preload-698195 kubelet[1239]: E0830 22:32:56.880129    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:33:10 no-preload-698195 kubelet[1239]: E0830 22:33:10.880597    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:33:22 no-preload-698195 kubelet[1239]: E0830 22:33:22.879800    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:33:22 no-preload-698195 kubelet[1239]: E0830 22:33:22.906635    1239 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:33:22 no-preload-698195 kubelet[1239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:33:22 no-preload-698195 kubelet[1239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:33:22 no-preload-698195 kubelet[1239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:33:37 no-preload-698195 kubelet[1239]: E0830 22:33:37.879679    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:33:52 no-preload-698195 kubelet[1239]: E0830 22:33:52.881211    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	
	* 
	* ==> storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] <==
	* I0830 22:21:02.231656       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 22:21:02.242487       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 22:21:02.243054       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 22:21:19.652627       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 22:21:19.652781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-698195_2bcc01c2-3faf-4b4a-b8fc-398575ecdd81!
	I0830 22:21:19.654372       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"315c45da-d624-46fd-99d0-dac8a2bd8ebf", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-698195_2bcc01c2-3faf-4b4a-b8fc-398575ecdd81 became leader
	I0830 22:21:19.753178       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-698195_2bcc01c2-3faf-4b4a-b8fc-398575ecdd81!
	
	* 
	* ==> storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] <==
	* I0830 22:20:31.096661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0830 22:21:01.100252       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-698195 -n no-preload-698195
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-698195 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nfbkd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-698195 describe pod metrics-server-57f55c9bc5-nfbkd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-698195 describe pod metrics-server-57f55c9bc5-nfbkd: exit status 1 (67.74324ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nfbkd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-698195 describe pod metrics-server-57f55c9bc5-nfbkd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-250163 -n old-k8s-version-250163
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-08-30 22:35:36.179674224 +0000 UTC m=+5178.372424159
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-250163 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-250163 logs -n 25: (1.355557175s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-519738 -- sudo                         | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-519738                                 | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-184733                              | stopped-upgrade-184733       | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:09 UTC |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	| delete  | -p                                                     | disable-driver-mounts-883991 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | disable-driver-mounts-883991                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:12 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-698195             | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208903            | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-791007  | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC | 30 Aug 23 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-698195                  | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208903                 | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC | 30 Aug 23 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-250163        | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC | 30 Aug 23 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-791007       | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC | 30 Aug 23 22:24 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-250163             | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC | 30 Aug 23 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:16:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:16:59.758341  995603 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:16:59.758470  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758479  995603 out.go:309] Setting ErrFile to fd 2...
	I0830 22:16:59.758484  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758692  995603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:16:59.759241  995603 out.go:303] Setting JSON to false
	I0830 22:16:59.760232  995603 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14367,"bootTime":1693419453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:16:59.760291  995603 start.go:138] virtualization: kvm guest
	I0830 22:16:59.762744  995603 out.go:177] * [old-k8s-version-250163] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:16:59.764395  995603 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:16:59.765863  995603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:16:59.764404  995603 notify.go:220] Checking for updates...
	I0830 22:16:59.767579  995603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:16:59.769244  995603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:16:59.771001  995603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:16:59.772625  995603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:16:59.774574  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:16:59.774929  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.775032  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.790271  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0830 22:16:59.790677  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.791257  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.791283  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.791645  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.791879  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.793885  995603 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:16:59.795414  995603 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:16:59.795716  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.795752  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.810316  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0830 22:16:59.810694  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.811176  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.811201  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.811560  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.811808  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.845962  995603 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:16:59.847399  995603 start.go:298] selected driver: kvm2
	I0830 22:16:59.847410  995603 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.847546  995603 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:16:59.848301  995603 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.848376  995603 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:16:59.862654  995603 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:16:59.863040  995603 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:16:59.863080  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:16:59.863094  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:16:59.863109  995603 start_flags.go:319] config:
	{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.863341  995603 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.865500  995603 out.go:177] * Starting control plane node old-k8s-version-250163 in cluster old-k8s-version-250163
	I0830 22:17:00.916070  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:16:59.866763  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:16:59.866836  995603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0830 22:16:59.866852  995603 cache.go:57] Caching tarball of preloaded images
	I0830 22:16:59.866935  995603 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:16:59.866946  995603 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0830 22:16:59.867091  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:16:59.867314  995603 start.go:365] acquiring machines lock for old-k8s-version-250163: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:17:06.996025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:10.068036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:16.148043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:19.220024  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:25.300036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:28.372088  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:34.452043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:37.524037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:43.604037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:46.676107  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:52.756100  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:55.828195  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:01.908025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:04.980079  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:11.060035  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:14.132025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:20.212050  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:23.283995  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:26.288205  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 4m29.4670209s
	I0830 22:18:26.288261  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:26.288276  994705 fix.go:54] fixHost starting: 
	I0830 22:18:26.288621  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:26.288656  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:26.304048  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0830 22:18:26.304613  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:26.305138  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:18:26.305164  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:26.305518  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:26.305719  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:26.305843  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:18:26.307597  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Stopped err=<nil>
	I0830 22:18:26.307639  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	W0830 22:18:26.307827  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:26.309985  994705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208903" ...
	I0830 22:18:26.311551  994705 main.go:141] libmachine: (embed-certs-208903) Calling .Start
	I0830 22:18:26.311750  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring networks are active...
	I0830 22:18:26.312528  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network default is active
	I0830 22:18:26.312814  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network mk-embed-certs-208903 is active
	I0830 22:18:26.313153  994705 main.go:141] libmachine: (embed-certs-208903) Getting domain xml...
	I0830 22:18:26.313857  994705 main.go:141] libmachine: (embed-certs-208903) Creating domain...
	I0830 22:18:26.285881  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:26.285939  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:18:26.288013  994624 machine.go:91] provisioned docker machine in 4m37.410947228s
	I0830 22:18:26.288063  994624 fix.go:56] fixHost completed within 4m37.432260867s
	I0830 22:18:26.288085  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 4m37.432330775s
	W0830 22:18:26.288107  994624 start.go:672] error starting host: provision: host is not running
	W0830 22:18:26.288219  994624 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0830 22:18:26.288225  994624 start.go:687] Will try again in 5 seconds ...
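The block above is the no-preload-698195 start (pid 994624) failing to provision: every SSH dial to 192.168.72.28:22 returns "no route to host", so after roughly 4m37s the machines lock is released, StartHost reports "host is not running", and a retry is scheduled. Note that the several profiles in this run (pids 994624, 994705, 995192, 995603) contend for the same machines lock, which is why the later "acquired machines lock ..." lines report multi-minute waits. As a hedged aside, below is a minimal sketch of how one might check the libvirt side of such a "no route to host" loop from the Jenkins host; the domain name and IP are taken from the log, and the commands assume the kvm2 driver's usual libvirt setup rather than anything this report itself verifies:

    # Hypothetical checks for the SSH "no route to host" retries seen above.
    sudo virsh list --all                          # is the no-preload-698195 domain actually running?
    sudo virsh domifaddr no-preload-698195         # did DHCP hand it 192.168.72.28 at all?
    ping -c 3 192.168.72.28                        # basic reachability from the host
    nc -vz -w 5 192.168.72.28 22                   # is anything listening on the SSH port?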
	I0830 22:18:27.529120  994705 main.go:141] libmachine: (embed-certs-208903) Waiting to get IP...
	I0830 22:18:27.530028  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.530390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.530515  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.530404  996319 retry.go:31] will retry after 311.351139ms: waiting for machine to come up
	I0830 22:18:27.843013  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.843398  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.843427  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.843337  996319 retry.go:31] will retry after 367.953943ms: waiting for machine to come up
	I0830 22:18:28.213214  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.213785  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.213820  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.213722  996319 retry.go:31] will retry after 424.275825ms: waiting for machine to come up
	I0830 22:18:28.639216  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.639670  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.639707  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.639609  996319 retry.go:31] will retry after 502.321201ms: waiting for machine to come up
	I0830 22:18:29.143240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.143823  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.143850  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.143790  996319 retry.go:31] will retry after 680.495047ms: waiting for machine to come up
	I0830 22:18:29.825462  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.825879  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.825904  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.825836  996319 retry.go:31] will retry after 756.63617ms: waiting for machine to come up
	I0830 22:18:30.583723  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:30.584179  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:30.584212  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:30.584118  996319 retry.go:31] will retry after 851.722792ms: waiting for machine to come up
	I0830 22:18:31.437603  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:31.438031  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:31.438063  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:31.437986  996319 retry.go:31] will retry after 1.214893807s: waiting for machine to come up
	I0830 22:18:31.289961  994624 start.go:365] acquiring machines lock for no-preload-698195: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:32.654351  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:32.654803  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:32.654829  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:32.654756  996319 retry.go:31] will retry after 1.574180335s: waiting for machine to come up
	I0830 22:18:34.231491  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:34.231911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:34.231944  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:34.231826  996319 retry.go:31] will retry after 1.99107048s: waiting for machine to come up
	I0830 22:18:36.225911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:36.226336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:36.226363  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:36.226283  996319 retry.go:31] will retry after 1.816508761s: waiting for machine to come up
	I0830 22:18:38.044672  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:38.045061  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:38.045094  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:38.045021  996319 retry.go:31] will retry after 2.343148299s: waiting for machine to come up
	I0830 22:18:40.389346  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:40.389753  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:40.389778  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:40.389700  996319 retry.go:31] will retry after 3.682098761s: waiting for machine to come up
	I0830 22:18:45.025750  995192 start.go:369] acquired machines lock for "default-k8s-diff-port-791007" in 3m32.939054887s
	I0830 22:18:45.025823  995192 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:45.025847  995192 fix.go:54] fixHost starting: 
	I0830 22:18:45.026291  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:45.026333  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:45.041161  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0830 22:18:45.041657  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:45.042176  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:18:45.042208  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:45.042544  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:45.042748  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:18:45.042910  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:18:45.044428  995192 fix.go:102] recreateIfNeeded on default-k8s-diff-port-791007: state=Stopped err=<nil>
	I0830 22:18:45.044454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	W0830 22:18:45.044615  995192 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:45.046538  995192 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-791007" ...
	I0830 22:18:44.074916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075386  994705 main.go:141] libmachine: (embed-certs-208903) Found IP for machine: 192.168.50.159
	I0830 22:18:44.075411  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has current primary IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075418  994705 main.go:141] libmachine: (embed-certs-208903) Reserving static IP address...
	I0830 22:18:44.075899  994705 main.go:141] libmachine: (embed-certs-208903) Reserved static IP address: 192.168.50.159
	I0830 22:18:44.075928  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.075939  994705 main.go:141] libmachine: (embed-certs-208903) Waiting for SSH to be available...
	I0830 22:18:44.075959  994705 main.go:141] libmachine: (embed-certs-208903) DBG | skip adding static IP to network mk-embed-certs-208903 - found existing host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"}
	I0830 22:18:44.075968  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Getting to WaitForSSH function...
	I0830 22:18:44.078068  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.078436  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH client type: external
	I0830 22:18:44.078533  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa (-rw-------)
	I0830 22:18:44.078569  994705 main.go:141] libmachine: (embed-certs-208903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:18:44.078590  994705 main.go:141] libmachine: (embed-certs-208903) DBG | About to run SSH command:
	I0830 22:18:44.078622  994705 main.go:141] libmachine: (embed-certs-208903) DBG | exit 0
	I0830 22:18:44.167514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | SSH cmd err, output: <nil>: 
	I0830 22:18:44.167898  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetConfigRaw
	I0830 22:18:44.168594  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.170974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.171370  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171696  994705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:18:44.171967  994705 machine.go:88] provisioning docker machine ...
	I0830 22:18:44.171989  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:44.172184  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172371  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:18:44.172397  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172563  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.174522  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174861  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.174894  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174988  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.175159  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175286  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175413  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.175627  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.176111  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.176132  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:18:44.309192  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:18:44.309225  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.311931  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312327  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.312362  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312512  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.312727  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.312919  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.313048  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.313215  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.313623  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.313638  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:18:44.440529  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:44.440594  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:18:44.440641  994705 buildroot.go:174] setting up certificates
	I0830 22:18:44.440653  994705 provision.go:83] configureAuth start
	I0830 22:18:44.440663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.440943  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.443289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443663  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.443705  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443805  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.445987  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446297  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.446328  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446462  994705 provision.go:138] copyHostCerts
	I0830 22:18:44.446524  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:18:44.446550  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:18:44.446638  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:18:44.446750  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:18:44.446763  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:18:44.446800  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:18:44.446907  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:18:44.446919  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:18:44.446955  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:18:44.447036  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:18:44.664313  994705 provision.go:172] copyRemoteCerts
	I0830 22:18:44.664387  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:18:44.664434  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.666819  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667160  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.667192  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667338  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.667565  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.667687  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.667839  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:18:44.756922  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:18:44.780430  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:18:44.803396  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:18:44.825975  994705 provision.go:86] duration metric: configureAuth took 385.307932ms
	I0830 22:18:44.826006  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:18:44.826230  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:18:44.826334  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.828862  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829199  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.829240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829383  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.829606  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829770  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829907  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.830104  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.830593  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.830615  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:18:45.025539  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025585  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:18:45.025596  994705 machine.go:91] provisioned docker machine in 853.613637ms
	I0830 22:18:45.025627  994705 fix.go:56] fixHost completed within 18.737351046s
	I0830 22:18:45.025637  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 18.737393499s
	W0830 22:18:45.025662  994705 start.go:672] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0830 22:18:45.025746  994705 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025760  994705 start.go:687] Will try again in 5 seconds ...
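The embed-certs-208903 failure just above is different: the VM restarts, SSH works, but `sudo systemctl restart crio` exits non-zero with "A dependency job for crio.service failed. See 'journalctl -xe' for details.", so provisioning aborts and minikube schedules the same five-second retry. (The `%!s(MISSING)` token inside the logged command is Go's fmt package flagging a `%s` verb with no matching argument; it is an artifact of how the command string was logged, not part of the failure.) A minimal, hedged sketch of how one might chase the failed dependency from inside the guest follows; the `minikube ssh` profile name comes from the log, and the unit names are the standard systemd ones, not something confirmed by this report:

    # Hypothetical follow-up inside the VM, e.g. after `minikube ssh -p embed-certs-208903`.
    sudo systemctl status crio.service                    # current state and most recent error
    sudo systemctl list-dependencies crio.service         # which required unit is inactive or failed?
    sudo journalctl -xeu crio.service --no-pager          # the log the error message itself points at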
	I0830 22:18:45.047821  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Start
	I0830 22:18:45.047982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring networks are active...
	I0830 22:18:45.048684  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network default is active
	I0830 22:18:45.049040  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network mk-default-k8s-diff-port-791007 is active
	I0830 22:18:45.049401  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Getting domain xml...
	I0830 22:18:45.050009  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Creating domain...
	I0830 22:18:46.288943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting to get IP...
	I0830 22:18:46.289982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290359  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.290388  996430 retry.go:31] will retry after 228.105709ms: waiting for machine to come up
	I0830 22:18:46.519862  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520369  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.520342  996430 retry.go:31] will retry after 343.008473ms: waiting for machine to come up
	I0830 22:18:46.865023  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865426  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.865385  996430 retry.go:31] will retry after 467.017605ms: waiting for machine to come up
	I0830 22:18:50.028247  994705 start.go:365] acquiring machines lock for embed-certs-208903: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:47.334027  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334655  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.334600  996430 retry.go:31] will retry after 601.952764ms: waiting for machine to come up
	I0830 22:18:47.937980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.938387  996430 retry.go:31] will retry after 556.18277ms: waiting for machine to come up
	I0830 22:18:48.495747  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496130  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:48.496101  996430 retry.go:31] will retry after 696.126701ms: waiting for machine to come up
	I0830 22:18:49.193405  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193789  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193822  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:49.193752  996430 retry.go:31] will retry after 1.123021492s: waiting for machine to come up
	I0830 22:18:50.318326  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318710  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:50.318637  996430 retry.go:31] will retry after 1.198520166s: waiting for machine to come up
	I0830 22:18:51.518959  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519302  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519332  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:51.519244  996430 retry.go:31] will retry after 1.851352392s: waiting for machine to come up
	I0830 22:18:53.373208  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373676  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373713  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:53.373594  996430 retry.go:31] will retry after 1.789163964s: waiting for machine to come up
	I0830 22:18:55.164132  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164664  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:55.164587  996430 retry.go:31] will retry after 2.037803279s: waiting for machine to come up
	I0830 22:18:57.204503  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204957  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204984  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:57.204919  996430 retry.go:31] will retry after 3.365492251s: waiting for machine to come up
	I0830 22:19:00.572195  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:19:00.572533  996430 retry.go:31] will retry after 3.57478782s: waiting for machine to come up
	I0830 22:19:05.536665  995603 start.go:369] acquired machines lock for "old-k8s-version-250163" in 2m5.669275373s
	I0830 22:19:05.536730  995603 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:05.536751  995603 fix.go:54] fixHost starting: 
	I0830 22:19:05.537197  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:05.537240  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:05.556581  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0830 22:19:05.557016  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:05.557559  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:19:05.557590  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:05.557937  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:05.558124  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:05.558290  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:19:05.559829  995603 fix.go:102] recreateIfNeeded on old-k8s-version-250163: state=Stopped err=<nil>
	I0830 22:19:05.559871  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	W0830 22:19:05.560056  995603 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:05.562726  995603 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-250163" ...
	I0830 22:19:04.151280  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.151787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Found IP for machine: 192.168.61.104
	I0830 22:19:04.151820  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserving static IP address...
	I0830 22:19:04.151839  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has current primary IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.152254  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.152286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserved static IP address: 192.168.61.104
	I0830 22:19:04.152306  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | skip adding static IP to network mk-default-k8s-diff-port-791007 - found existing host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"}
	I0830 22:19:04.152324  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for SSH to be available...
	I0830 22:19:04.152339  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Getting to WaitForSSH function...
	I0830 22:19:04.154335  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.154701  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154791  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH client type: external
	I0830 22:19:04.154833  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa (-rw-------)
	I0830 22:19:04.154852  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:04.154868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | About to run SSH command:
	I0830 22:19:04.154879  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | exit 0
	I0830 22:19:04.251692  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:04.252182  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetConfigRaw
	I0830 22:19:04.252842  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.255184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255536  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.255571  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255850  995192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/config.json ...
	I0830 22:19:04.256118  995192 machine.go:88] provisioning docker machine ...
	I0830 22:19:04.256143  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:04.256344  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256504  995192 buildroot.go:166] provisioning hostname "default-k8s-diff-port-791007"
	I0830 22:19:04.256525  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256653  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.259010  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259366  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.259389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259509  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.259667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.260115  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.260787  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.260810  995192 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-791007 && echo "default-k8s-diff-port-791007" | sudo tee /etc/hostname
	I0830 22:19:04.403123  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-791007
	
	I0830 22:19:04.403166  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.405835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406219  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.406270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406476  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.406704  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.406892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.407047  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.407233  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.407634  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.407658  995192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-791007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-791007/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-791007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:04.549964  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:04.550002  995192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:04.550039  995192 buildroot.go:174] setting up certificates
	I0830 22:19:04.550053  995192 provision.go:83] configureAuth start
	I0830 22:19:04.550071  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.550422  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.552844  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553116  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.553150  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553313  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.555514  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.555880  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.555917  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.556036  995192 provision.go:138] copyHostCerts
	I0830 22:19:04.556100  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:04.556133  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:04.556213  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:04.556343  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:04.556354  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:04.556392  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:04.556485  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:04.556496  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:04.556528  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:04.556607  995192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-791007 san=[192.168.61.104 192.168.61.104 localhost 127.0.0.1 minikube default-k8s-diff-port-791007]
	I0830 22:19:04.756354  995192 provision.go:172] copyRemoteCerts
	I0830 22:19:04.756413  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:04.756438  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.759134  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759511  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.759544  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759739  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.759977  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.760153  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.760297  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:04.858949  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:04.882455  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0830 22:19:04.905659  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:04.929876  995192 provision.go:86] duration metric: configureAuth took 379.794026ms
	I0830 22:19:04.929905  995192 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:04.930124  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:04.930228  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.932799  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933159  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.933192  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933316  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.933531  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933703  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.934015  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.934606  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.934633  995192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:05.266317  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:05.266349  995192 machine.go:91] provisioned docker machine in 1.010213866s
	I0830 22:19:05.266363  995192 start.go:300] post-start starting for "default-k8s-diff-port-791007" (driver="kvm2")
	I0830 22:19:05.266378  995192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:05.266402  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.266764  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:05.266802  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.269938  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270300  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.270345  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270472  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.270650  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.270800  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.270922  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.365334  995192 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:05.369583  995192 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:05.369608  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:05.369701  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:05.369790  995192 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:05.369879  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:05.377933  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:05.401027  995192 start.go:303] post-start completed in 134.648062ms
	I0830 22:19:05.401051  995192 fix.go:56] fixHost completed within 20.37520461s
	I0830 22:19:05.401079  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.404156  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.404629  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404765  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.404960  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405138  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405260  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.405463  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:05.405917  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:05.405930  995192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:05.536449  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433945.485000324
	
	I0830 22:19:05.536479  995192 fix.go:206] guest clock: 1693433945.485000324
	I0830 22:19:05.536490  995192 fix.go:219] Guest: 2023-08-30 22:19:05.485000324 +0000 UTC Remote: 2023-08-30 22:19:05.401056033 +0000 UTC m=+233.468479321 (delta=83.944291ms)
	I0830 22:19:05.536524  995192 fix.go:190] guest clock delta is within tolerance: 83.944291ms
	I0830 22:19:05.536535  995192 start.go:83] releasing machines lock for "default-k8s-diff-port-791007", held for 20.510742441s
	I0830 22:19:05.536569  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.536868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:05.539651  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540017  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.540057  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540196  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540737  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540911  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540975  995192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:05.541036  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.541133  995192 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:05.541172  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.543846  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.543892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544250  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544317  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544411  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544540  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544627  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544707  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544792  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544865  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544926  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.544972  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.677442  995192 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:05.683243  995192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:05.832776  995192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:05.838924  995192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:05.839000  995192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:05.857231  995192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:05.857251  995192 start.go:466] detecting cgroup driver to use...
	I0830 22:19:05.857349  995192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:05.875107  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:05.888540  995192 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:05.888603  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:05.901129  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:05.914011  995192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:06.015763  995192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:06.144950  995192 docker.go:212] disabling docker service ...
	I0830 22:19:06.145052  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:06.159373  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:06.172560  995192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:06.279514  995192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:06.413719  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:06.427047  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:06.443765  995192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:06.443853  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.452621  995192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:06.452690  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.461365  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.470052  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.478685  995192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:06.487763  995192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:06.495483  995192 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:06.495551  995192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:06.508009  995192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:06.516397  995192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:06.615209  995192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:06.792388  995192 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:06.792466  995192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:06.798170  995192 start.go:534] Will wait 60s for crictl version
	I0830 22:19:06.798231  995192 ssh_runner.go:195] Run: which crictl
	I0830 22:19:06.801828  995192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:06.842351  995192 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:06.842459  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.898609  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.962179  995192 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:06.963711  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:06.966803  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967189  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:06.967225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967412  995192 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:06.972033  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:05.564313  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Start
	I0830 22:19:05.564511  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring networks are active...
	I0830 22:19:05.565235  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network default is active
	I0830 22:19:05.565567  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network mk-old-k8s-version-250163 is active
	I0830 22:19:05.565954  995603 main.go:141] libmachine: (old-k8s-version-250163) Getting domain xml...
	I0830 22:19:05.566644  995603 main.go:141] libmachine: (old-k8s-version-250163) Creating domain...
	I0830 22:19:06.869485  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting to get IP...
	I0830 22:19:06.870595  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:06.871071  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:06.871133  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:06.871046  996542 retry.go:31] will retry after 294.811471ms: waiting for machine to come up
	I0830 22:19:07.167657  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.168126  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.168172  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.168099  996542 retry.go:31] will retry after 376.474639ms: waiting for machine to come up
	I0830 22:19:07.546876  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.547389  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.547419  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.547354  996542 retry.go:31] will retry after 329.757182ms: waiting for machine to come up
	I0830 22:19:07.878995  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.879572  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.879601  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.879529  996542 retry.go:31] will retry after 567.335814ms: waiting for machine to come up
	I0830 22:19:08.448373  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.448996  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.449028  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.448958  996542 retry.go:31] will retry after 510.216093ms: waiting for machine to come up
	I0830 22:19:08.960855  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.961412  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.961451  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.961326  996542 retry.go:31] will retry after 688.575912ms: waiting for machine to come up
	I0830 22:19:09.651966  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:09.652379  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:09.652411  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:09.652346  996542 retry.go:31] will retry after 1.130912238s: waiting for machine to come up
	I0830 22:19:06.984632  995192 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:06.984698  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:07.020200  995192 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:07.020282  995192 ssh_runner.go:195] Run: which lz4
	I0830 22:19:07.024254  995192 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:19:07.028470  995192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:07.028508  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:19:08.986852  995192 crio.go:444] Took 1.962647 seconds to copy over tarball
	I0830 22:19:08.986915  995192 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:10.784839  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:10.785424  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:10.785456  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:10.785355  996542 retry.go:31] will retry after 898.98114ms: waiting for machine to come up
	I0830 22:19:11.685890  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:11.686614  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:11.686646  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:11.686558  996542 retry.go:31] will retry after 1.621086004s: waiting for machine to come up
	I0830 22:19:13.310234  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:13.310696  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:13.310721  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:13.310630  996542 retry.go:31] will retry after 1.652651656s: waiting for machine to come up
	I0830 22:19:12.113071  995192 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.126115747s)
	I0830 22:19:12.113107  995192 crio.go:451] Took 3.126230 seconds to extract the tarball
	I0830 22:19:12.113120  995192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:12.156320  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:12.200547  995192 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:19:12.200573  995192 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:19:12.200652  995192 ssh_runner.go:195] Run: crio config
	I0830 22:19:12.273153  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:12.273180  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:12.273205  995192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:12.273231  995192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-791007 NodeName:default-k8s-diff-port-791007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:19:12.273413  995192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-791007"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:12.273497  995192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-791007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0830 22:19:12.273573  995192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:19:12.283536  995192 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:12.283609  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:12.292260  995192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0830 22:19:12.309407  995192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:12.325757  995192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0830 22:19:12.342664  995192 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:12.346459  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:12.358721  995192 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007 for IP: 192.168.61.104
	I0830 22:19:12.358797  995192 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:12.359010  995192 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:12.359066  995192 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:12.359147  995192 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.key
	I0830 22:19:12.359219  995192 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key.a202b4d9
	I0830 22:19:12.359255  995192 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key
	I0830 22:19:12.359363  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:12.359390  995192 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:12.359400  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:12.359424  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:12.359449  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:12.359471  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:12.359507  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:12.360328  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:12.385275  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 22:19:12.410697  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:12.434240  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:19:12.457206  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:12.484695  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:12.507670  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:12.531114  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:12.554501  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:12.579425  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:12.603211  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:12.628506  995192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:12.645536  995192 ssh_runner.go:195] Run: openssl version
	I0830 22:19:12.650882  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:12.660449  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665173  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665239  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.670785  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:12.681196  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:12.690775  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695204  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695262  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.700668  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:12.710205  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:12.719691  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724744  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724803  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.730472  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:12.740194  995192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:12.744773  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:12.750633  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:12.756228  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:12.762258  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:12.767895  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:12.773716  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0830 22:19:12.779716  995192 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-791007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:12.779849  995192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:12.779895  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:12.808983  995192 cri.go:89] found id: ""
	I0830 22:19:12.809055  995192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:12.818188  995192 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:12.818208  995192 kubeadm.go:636] restartCluster start
	I0830 22:19:12.818258  995192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:12.829333  995192 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.830440  995192 kubeconfig.go:92] found "default-k8s-diff-port-791007" server: "https://192.168.61.104:8444"
	I0830 22:19:12.833172  995192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:12.841419  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.841468  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.852072  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.852092  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.852135  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.862195  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.362894  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.362981  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.374932  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.862450  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.862558  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.874629  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.363249  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.363368  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.375071  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.862656  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.862767  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.874077  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.363282  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.363389  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.374762  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.862279  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.862375  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.873942  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.362457  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.362554  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.373922  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.862336  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.862415  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.873540  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
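Each "Checking apiserver status ..." entry above is one iteration of a poll: run `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms and treat a non-zero exit (no matching process yet) as "apiserver not up". A minimal sketch of that loop, assuming local execution rather than minikube's SSH runner; names and the timeout are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until kube-apiserver shows up or the deadline passes.
// pgrep exits 1 when nothing matches, which is the repeated
// "stopped: unable to get apiserver pid" case in the log above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // PID found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if pid, err := waitForAPIServerPID(30 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("apiserver pid: %s", pid)
	}
}
```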
	I0830 22:19:14.964585  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:14.965020  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:14.965042  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:14.964995  996542 retry.go:31] will retry after 1.89297354s: waiting for machine to come up
	I0830 22:19:16.859309  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:16.859825  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:16.859852  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:16.859777  996542 retry.go:31] will retry after 2.908196896s: waiting for machine to come up
	I0830 22:19:17.363243  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.363347  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.378177  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:17.862706  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.862785  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.877394  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.363052  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.363183  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.377397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.862918  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.862995  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.878397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.362972  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.363052  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.374591  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.863153  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.863233  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.878572  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.362613  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.362703  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.374006  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.862535  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.862634  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.874066  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.362612  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.362721  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.375262  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.863011  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.863113  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.874498  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.771969  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:19.772453  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:19.772482  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:19.772410  996542 retry.go:31] will retry after 3.967899631s: waiting for machine to come up
	I0830 22:19:23.743741  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744344  995603 main.go:141] libmachine: (old-k8s-version-250163) Found IP for machine: 192.168.39.10
	I0830 22:19:23.744371  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserving static IP address...
	I0830 22:19:23.744387  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has current primary IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744827  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.744860  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserved static IP address: 192.168.39.10
	I0830 22:19:23.744877  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | skip adding static IP to network mk-old-k8s-version-250163 - found existing host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"}
	I0830 22:19:23.744904  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Getting to WaitForSSH function...
	I0830 22:19:23.744920  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting for SSH to be available...
	I0830 22:19:23.747285  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747642  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.747676  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747864  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH client type: external
	I0830 22:19:23.747896  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa (-rw-------)
	I0830 22:19:23.747935  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:23.747954  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | About to run SSH command:
	I0830 22:19:23.747971  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | exit 0
	I0830 22:19:23.836434  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:23.837035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetConfigRaw
	I0830 22:19:23.837845  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:23.840648  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.841088  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841433  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:19:23.841663  995603 machine.go:88] provisioning docker machine ...
	I0830 22:19:23.841688  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:23.841895  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842049  995603 buildroot.go:166] provisioning hostname "old-k8s-version-250163"
	I0830 22:19:23.842069  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842291  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.844953  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845376  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.845408  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845678  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.845885  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846186  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.846361  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.846839  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.846861  995603 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-250163 && echo "old-k8s-version-250163" | sudo tee /etc/hostname
	I0830 22:19:23.981507  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-250163
	
	I0830 22:19:23.981556  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.984891  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.985249  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985369  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.985604  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.985811  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.986000  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.986199  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.986603  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.986620  995603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-250163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-250163/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-250163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:24.115894  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
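The SSH command just above is the idempotent hostname fix-up: if `/etc/hosts` does not already map the new hostname, either rewrite the existing `127.0.1.1` entry or append one. A sketch of the same decision applied to the file contents in memory; the helper is hypothetical, minikube itself runs the shell shown in the log:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureLoopbackHostname mirrors the shell in the log: keep the file as-is if the
// hostname is already present, rewrite an existing "127.0.1.1 ..." line, or append one.
func ensureLoopbackHostname(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(ensureLoopbackHostname("127.0.0.1 localhost\n127.0.1.1 minikube\n", "old-k8s-version-250163"))
}
```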
	I0830 22:19:24.115952  995603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:24.115985  995603 buildroot.go:174] setting up certificates
	I0830 22:19:24.115996  995603 provision.go:83] configureAuth start
	I0830 22:19:24.116014  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:24.116342  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:24.118887  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119266  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.119312  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119572  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.122166  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122551  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.122590  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122700  995603 provision.go:138] copyHostCerts
	I0830 22:19:24.122769  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:24.122793  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:24.122868  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:24.122989  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:24.123004  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:24.123035  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:24.123168  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:24.123184  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:24.123217  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:24.123302  995603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-250163 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube old-k8s-version-250163]
	I0830 22:19:24.303093  995603 provision.go:172] copyRemoteCerts
	I0830 22:19:24.303156  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:24.303182  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.305900  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306173  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.306199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306352  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.306545  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.306728  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.306873  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.393858  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:24.418791  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:19:24.441090  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:24.462926  995603 provision.go:86] duration metric: configureAuth took 346.913079ms
	I0830 22:19:24.462952  995603 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:24.463136  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:19:24.463224  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.465978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466321  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.466357  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466559  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.466785  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.466934  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.467035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.467173  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.467657  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.467676  995603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:25.058077  994624 start.go:369] acquired machines lock for "no-preload-698195" in 53.768050843s
	I0830 22:19:25.058128  994624 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:25.058141  994624 fix.go:54] fixHost starting: 
	I0830 22:19:25.058564  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:25.058603  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:25.076580  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0830 22:19:25.077082  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:25.077788  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:19:25.077824  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:25.078214  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:25.078418  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:25.078695  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:19:25.080411  994624 fix.go:102] recreateIfNeeded on no-preload-698195: state=Stopped err=<nil>
	I0830 22:19:25.080447  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	W0830 22:19:25.080636  994624 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:25.082566  994624 out.go:177] * Restarting existing kvm2 VM for "no-preload-698195" ...
	I0830 22:19:24.795523  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:24.795562  995603 machine.go:91] provisioned docker machine in 953.87669ms
	I0830 22:19:24.795575  995603 start.go:300] post-start starting for "old-k8s-version-250163" (driver="kvm2")
	I0830 22:19:24.795590  995603 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:24.795616  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:24.795984  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:24.796046  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.799136  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799534  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.799561  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799797  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.799996  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.800210  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.800396  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.890335  995603 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:24.894780  995603 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:24.894807  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:24.894890  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:24.894986  995603 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:24.895110  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:24.907259  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:24.932802  995603 start.go:303] post-start completed in 137.211475ms
	I0830 22:19:24.932829  995603 fix.go:56] fixHost completed within 19.396077949s
	I0830 22:19:24.932858  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.935762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936118  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.936160  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936310  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.936538  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936721  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936918  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.937109  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.937748  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.937767  995603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:25.057876  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433965.004650095
	
	I0830 22:19:25.057911  995603 fix.go:206] guest clock: 1693433965.004650095
	I0830 22:19:25.057924  995603 fix.go:219] Guest: 2023-08-30 22:19:25.004650095 +0000 UTC Remote: 2023-08-30 22:19:24.932833395 +0000 UTC m=+145.224486267 (delta=71.8167ms)
	I0830 22:19:25.057987  995603 fix.go:190] guest clock delta is within tolerance: 71.8167ms
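fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the skew exceeds a tolerance; here the 71.8167ms delta is accepted. A small sketch of that comparison, assuming the guest timestamp has already been read as a string; the 2s tolerance is an illustrative value, not minikube's actual threshold:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses a "seconds.nanoseconds" guest timestamp (the output of
// `date +%s.%N`) and reports how far it is from the host clock and whether
// that skew is within the given tolerance.
func clockDelta(guest string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, false
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guestTime.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log above: guest clock vs. the host's "Remote" timestamp.
	delta, ok := clockDelta("1693433965.004650095", time.Unix(0, 1693433964932833395), 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
```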
	I0830 22:19:25.057998  995603 start.go:83] releasing machines lock for "old-k8s-version-250163", held for 19.521294969s
	I0830 22:19:25.058036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.058351  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:25.061325  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061749  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.061782  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061965  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062635  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062921  995603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:25.062977  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.063084  995603 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:25.063119  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.065978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066217  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066375  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066428  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066620  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066668  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066784  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066806  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.066953  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.067142  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067206  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067278  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.067389  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.181017  995603 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:25.188428  995603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:25.337310  995603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:25.346144  995603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:25.346231  995603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:25.368931  995603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:25.368966  995603 start.go:466] detecting cgroup driver to use...
	I0830 22:19:25.369048  995603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:25.383524  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:25.399296  995603 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:25.399365  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:25.416387  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:25.430426  995603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:25.552861  995603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:25.699278  995603 docker.go:212] disabling docker service ...
	I0830 22:19:25.699350  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:25.718108  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:25.736420  995603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:25.871165  995603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:25.993674  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:26.009215  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:26.027014  995603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0830 22:19:26.027122  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.038902  995603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:26.038985  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.051908  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.062635  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.073049  995603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
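The sed commands above rewrite `/etc/crio/crio.conf.d/02-crio.conf` so CRI-O uses the pause image matching the old Kubernetes version, the `cgroupfs` cgroup manager, and a per-pod conmon cgroup. A sketch of the same three edits applied to the file contents in memory; the helper and flow are illustrative, minikube itself shells out to sed as shown:

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the edits the log performs with sed:
// pin pause_image, force cgroup_manager = "cgroupfs", and set conmon_cgroup = "pod".
func rewriteCrioConf(conf, pauseImage string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).ReplaceAllString(conf, "")
	return regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.1"))
}
```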
	I0830 22:19:26.086514  995603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:26.098352  995603 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:26.098405  995603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:26.117326  995603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
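The sysctl probe fails with status 255 because `/proc/sys/net/bridge/bridge-nf-call-iptables` does not exist until the `br_netfilter` module is loaded, which is exactly the fallback the next two commands perform before enabling IPv4 forwarding. A compact sketch of that fallback, assuming it runs as root; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the log: if the bridge-nf sysctl file is absent,
// load br_netfilter, then make sure IPv4 forwarding is on.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```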
	I0830 22:19:26.129854  995603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:26.259656  995603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:26.476938  995603 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:26.477034  995603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:26.482773  995603 start.go:534] Will wait 60s for crictl version
	I0830 22:19:26.482841  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:26.486853  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:26.525498  995603 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:26.525595  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.585226  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.641386  995603 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0830 22:19:22.362364  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:22.362448  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:22.373701  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:22.842449  995192 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:22.842531  995192 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:22.842551  995192 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:22.842623  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:22.871557  995192 cri.go:89] found id: ""
	I0830 22:19:22.871624  995192 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:22.886295  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:22.894486  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:22.894549  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902556  995192 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902578  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.017775  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.631493  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.831074  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.923222  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.994499  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:23.994583  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.007515  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.519195  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.019167  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.519068  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.019708  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.519664  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.547751  995192 api_server.go:72] duration metric: took 2.553248139s to wait for apiserver process to appear ...
	I0830 22:19:26.547794  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:26.547816  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:25.084008  994624 main.go:141] libmachine: (no-preload-698195) Calling .Start
	I0830 22:19:25.084189  994624 main.go:141] libmachine: (no-preload-698195) Ensuring networks are active...
	I0830 22:19:25.085011  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network default is active
	I0830 22:19:25.085319  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network mk-no-preload-698195 is active
	I0830 22:19:25.085676  994624 main.go:141] libmachine: (no-preload-698195) Getting domain xml...
	I0830 22:19:25.086427  994624 main.go:141] libmachine: (no-preload-698195) Creating domain...
	I0830 22:19:26.443042  994624 main.go:141] libmachine: (no-preload-698195) Waiting to get IP...
	I0830 22:19:26.444179  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.444691  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.444784  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.444686  996676 retry.go:31] will retry after 208.17912ms: waiting for machine to come up
	I0830 22:19:26.654132  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.654621  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.654651  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.654581  996676 retry.go:31] will retry after 304.249592ms: waiting for machine to come up
	I0830 22:19:26.960205  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.960990  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.961014  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.960912  996676 retry.go:31] will retry after 342.108913ms: waiting for machine to come up
	I0830 22:19:27.304766  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.305661  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.305700  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.305602  996676 retry.go:31] will retry after 500.147687ms: waiting for machine to come up
	I0830 22:19:27.808375  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.808867  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.808884  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.808796  996676 retry.go:31] will retry after 562.543443ms: waiting for machine to come up
	I0830 22:19:28.373420  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:28.373912  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:28.373938  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:28.373863  996676 retry.go:31] will retry after 755.787662ms: waiting for machine to come up
	I0830 22:19:26.642985  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:26.646304  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646712  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:26.646773  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646957  995603 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:26.652439  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:26.667339  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:19:26.667418  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:26.703670  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:26.703750  995603 ssh_runner.go:195] Run: which lz4
	I0830 22:19:26.708087  995603 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 22:19:26.712329  995603 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:26.712362  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0830 22:19:28.602303  995603 crio.go:444] Took 1.894253 seconds to copy over tarball
	I0830 22:19:28.602408  995603 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
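	For reference, the preload step logged above copies the cached image tarball onto the guest and unpacks it with lz4 into /var. A minimal standalone Go sketch of the extract command follows; the paths are taken from the log above and the helper is illustrative only, not part of minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirror of the logged extract step: unpack the lz4-compressed image
		// tarball into /var on the guest. Paths are assumed from the log above.
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Println("preloaded images extracted")
	}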
	I0830 22:19:30.838763  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.838807  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:30.838824  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:30.908950  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.908987  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:31.409372  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.420411  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.420480  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:31.909095  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.916778  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.916813  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:29.130983  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:29.131530  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:29.131565  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:29.131459  996676 retry.go:31] will retry after 951.657872ms: waiting for machine to come up
	I0830 22:19:30.084853  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:30.085280  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:30.085306  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:30.085247  996676 retry.go:31] will retry after 1.469099841s: waiting for machine to come up
	I0830 22:19:31.556432  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:31.556893  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:31.556918  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:31.556809  996676 retry.go:31] will retry after 1.217757948s: waiting for machine to come up
	I0830 22:19:32.775796  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:32.776120  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:32.776152  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:32.776080  996676 retry.go:31] will retry after 2.032727742s: waiting for machine to come up
	I0830 22:19:31.859924  995603 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.257478408s)
	I0830 22:19:31.859957  995603 crio.go:451] Took 3.257622 seconds to extract the tarball
	I0830 22:19:31.859970  995603 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:31.917027  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:31.965752  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:31.965777  995603 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:31.965886  995603 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.965944  995603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.965980  995603 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.965879  995603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.966084  995603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.965878  995603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.965967  995603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:31.965901  995603 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968024  995603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.968045  995603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.968079  995603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.968186  995603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.968191  995603 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.968193  995603 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968248  995603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.968766  995603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.140478  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.140975  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.157997  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0830 22:19:32.159468  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.159950  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.160033  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.161682  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.255481  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:32.261235  995603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0830 22:19:32.261291  995603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.261340  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.282724  995603 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0830 22:19:32.282781  995603 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.282854  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378268  995603 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0830 22:19:32.378372  995603 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0830 22:19:32.378417  995603 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0830 22:19:32.378507  995603 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.378551  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378377  995603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0830 22:19:32.378578  995603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0830 22:19:32.378591  995603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.378600  995603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.378295  995603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378632  995603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.378439  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378657  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.468864  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.468935  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.469002  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.469032  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.469123  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.469183  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0830 22:19:32.469184  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.563508  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0830 22:19:32.563630  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0830 22:19:32.586962  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0830 22:19:32.587044  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0830 22:19:32.587059  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0830 22:19:32.587115  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0830 22:19:32.587208  995603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.587265  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0830 22:19:32.592221  995603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0830 22:19:32.592246  995603 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.592300  995603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0830 22:19:34.254194  995603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.661863162s)
	I0830 22:19:34.254235  995603 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0830 22:19:34.254281  995603 cache_images.go:92] LoadImages completed in 2.288490025s
	W0830 22:19:34.254418  995603 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0830 22:19:34.254514  995603 ssh_runner.go:195] Run: crio config
	I0830 22:19:34.338842  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.338876  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.338903  995603 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:34.338929  995603 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-250163 NodeName:old-k8s-version-250163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 22:19:34.339134  995603 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-250163"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-250163
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.10:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:34.339240  995603 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-250163 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:19:34.339313  995603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0830 22:19:34.348990  995603 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:34.349076  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:34.358084  995603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0830 22:19:34.376989  995603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:34.396552  995603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0830 22:19:34.416666  995603 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:34.421910  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:34.436393  995603 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163 for IP: 192.168.39.10
	I0830 22:19:34.436490  995603 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:34.436717  995603 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:34.436774  995603 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:34.436867  995603 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.key
	I0830 22:19:34.436944  995603 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key.713efbbe
	I0830 22:19:34.437006  995603 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key
	I0830 22:19:34.437140  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:34.437187  995603 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:34.437203  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:34.437249  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:34.437284  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:34.437320  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:34.437388  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:34.438079  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:34.470943  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:19:34.503477  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:34.533783  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:19:34.562423  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:34.594418  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:34.625417  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:34.657444  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:34.689407  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:34.719004  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:34.745856  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:32.410110  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.418241  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.418269  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:32.910053  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.915839  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.915870  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.410086  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.488115  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.488161  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.909647  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.915252  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.915284  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.409978  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.418957  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:34.418995  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.909561  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.925400  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:19:34.938760  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:19:34.938793  995192 api_server.go:131] duration metric: took 8.390990557s to wait for apiserver health ...
	I0830 22:19:34.938804  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.938813  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.941052  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:19:34.942805  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:34.967544  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:19:34.998450  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:35.012600  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:35.012681  995192 system_pods.go:61] "coredns-5dd5756b68-992p2" [83ad338b-0338-45c3-a5ed-f772d100046b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:35.012702  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [4ed4f652-47c4-4d79-b8a8-dd0cc778bce0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:19:35.012714  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [c01b9dfc-ad6f-4348-abf0-fde4a64bfa98] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:19:35.012732  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [94cbccaf-3d5a-480c-8ee0-b8af5030909d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:19:35.012748  995192 system_pods.go:61] "kube-proxy-vckmf" [03f05466-f99b-4803-9164-233bfb9e7bb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:19:35.012760  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [2c5e190d-c93b-400a-8538-e31cc2016cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:19:35.012774  995192 system_pods.go:61] "metrics-server-57f55c9bc5-p8pp2" [4eaff1be-4258-427b-a110-47dabbffecee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:19:35.012788  995192 system_pods.go:61] "storage-provisioner" [8db3da8b-8256-405d-8d9c-79fdb6da8ab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:19:35.012800  995192 system_pods.go:74] duration metric: took 14.324835ms to wait for pod list to return data ...
	I0830 22:19:35.012814  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:35.024186  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:35.024216  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:35.024229  995192 node_conditions.go:105] duration metric: took 11.409776ms to run NodePressure ...
	I0830 22:19:35.024284  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:35.318824  995192 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324484  995192 kubeadm.go:787] kubelet initialised
	I0830 22:19:35.324512  995192 kubeadm.go:788] duration metric: took 5.656923ms waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324525  995192 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:19:35.334137  995192 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
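	The healthz probes logged above keep retrying until the apiserver answers 200 with a body of "ok" (moving through the 403 anonymous-access and 500 bootstrap-roles phases first). A minimal Go sketch of such a polling loop follows; the endpoint URL and retry interval are assumed from the log, and TLS verification is skipped only because the probe runs without client credentials, as in the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Poll the apiserver healthz endpoint until it reports "ok",
		// roughly mirroring the api_server.go checks in the log above.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		url := "https://192.168.61.104:8444/healthz" // endpoint taken from the log
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // interval assumed for illustration
		}
	}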
	I0830 22:19:34.810276  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:34.810797  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:34.810836  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:34.810732  996676 retry.go:31] will retry after 2.550508742s: waiting for machine to come up
	I0830 22:19:37.364002  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:37.364550  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:37.364582  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:37.364489  996676 retry.go:31] will retry after 2.230782644s: waiting for machine to come up
	I0830 22:19:34.771235  995603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:34.787672  995603 ssh_runner.go:195] Run: openssl version
	I0830 22:19:34.793400  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:34.803208  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808108  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808166  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.814296  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:34.824791  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:34.838527  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844726  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844789  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.852442  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:34.862510  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:34.875456  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880581  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880702  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.886591  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:34.897133  995603 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:34.902292  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:34.908905  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:34.915276  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:34.921204  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:34.927878  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:34.934091  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
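	(The run of openssl commands above verifies that each control-plane certificate remains valid for at least the next 86400 seconds, i.e. 24 hours, before minikube reuses it for a cluster restart. `-checkend` exits non-zero if the certificate expires within that window; an illustrative standalone invocation, with the path copied from the log:)
	# exit status 0: cert still valid in 24h; non-zero: it will have expired by then
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "ok for another 24h" || echo "expiring within 24h"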
	I0830 22:19:34.940851  995603 kubeadm.go:404] StartCluster: {Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:34.940966  995603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:34.941036  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:34.978950  995603 cri.go:89] found id: ""
	I0830 22:19:34.979038  995603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:34.988290  995603 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:34.988324  995603 kubeadm.go:636] restartCluster start
	I0830 22:19:34.988403  995603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:34.998277  995603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:34.999385  995603 kubeconfig.go:92] found "old-k8s-version-250163" server: "https://192.168.39.10:8443"
	I0830 22:19:35.002017  995603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:35.013903  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.013962  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.028780  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.028800  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.028845  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.043243  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.543986  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.544109  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.555939  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.044164  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.044259  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.055496  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.544110  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.544243  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.555999  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.043535  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.043628  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.055019  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.543435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.543546  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.558778  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.044367  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.044482  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.058777  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.543327  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.543431  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.555133  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.043720  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.043874  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.059955  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.543461  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.543625  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.558707  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
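	(Each "Checking apiserver status" iteration above runs the same probe over SSH and treats a non-zero exit as "apiserver not up yet" before sleeping and retrying. The probe is plain pgrep: -x matches the pattern against the whole string, -n keeps only the newest match, -f matches against the full command line rather than just the process name. Run by hand, and purely as an illustration, it looks like this:)
	# exits 1 and prints nothing while no kube-apiserver process matches, which is what the loop above keeps seeing
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver process not found"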
	I0830 22:19:37.360241  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:39.363755  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:40.357373  995192 pod_ready.go:92] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:40.357396  995192 pod_ready.go:81] duration metric: took 5.023230161s waiting for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:40.357409  995192 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:39.597197  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:39.597650  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:39.597684  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:39.597603  996676 retry.go:31] will retry after 3.562835127s: waiting for machine to come up
	I0830 22:19:43.161572  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:43.162020  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:43.162054  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:43.161973  996676 retry.go:31] will retry after 5.409514109s: waiting for machine to come up
	I0830 22:19:40.044014  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.044104  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.059377  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:40.543910  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.544012  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.555295  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.043380  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.043493  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.055443  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.544046  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.544121  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.555832  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.043785  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.043876  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.054809  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.543376  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.543463  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.554254  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.043435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.043543  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.054734  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.544308  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.544418  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.555603  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.044211  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.044291  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.055403  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.544013  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.544117  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.555197  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.378396  995192 pod_ready.go:102] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:42.881428  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.881456  995192 pod_ready.go:81] duration metric: took 2.524040213s waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.881467  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892688  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.892718  995192 pod_ready.go:81] duration metric: took 11.243576ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892731  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898434  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.898463  995192 pod_ready.go:81] duration metric: took 5.721888ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898476  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904261  995192 pod_ready.go:92] pod "kube-proxy-vckmf" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.904287  995192 pod_ready.go:81] duration metric: took 5.803127ms waiting for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904299  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153736  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:43.153763  995192 pod_ready.go:81] duration metric: took 249.454932ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153777  995192 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:45.462667  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:48.575718  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576172  994624 main.go:141] libmachine: (no-preload-698195) Found IP for machine: 192.168.72.28
	I0830 22:19:48.576206  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has current primary IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576217  994624 main.go:141] libmachine: (no-preload-698195) Reserving static IP address...
	I0830 22:19:48.576671  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.576719  994624 main.go:141] libmachine: (no-preload-698195) Reserved static IP address: 192.168.72.28
	I0830 22:19:48.576754  994624 main.go:141] libmachine: (no-preload-698195) DBG | skip adding static IP to network mk-no-preload-698195 - found existing host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"}
	I0830 22:19:48.576776  994624 main.go:141] libmachine: (no-preload-698195) DBG | Getting to WaitForSSH function...
	I0830 22:19:48.576792  994624 main.go:141] libmachine: (no-preload-698195) Waiting for SSH to be available...
	I0830 22:19:48.578953  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579261  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.579290  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579398  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH client type: external
	I0830 22:19:48.579417  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa (-rw-------)
	I0830 22:19:48.579451  994624 main.go:141] libmachine: (no-preload-698195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:48.579478  994624 main.go:141] libmachine: (no-preload-698195) DBG | About to run SSH command:
	I0830 22:19:48.579493  994624 main.go:141] libmachine: (no-preload-698195) DBG | exit 0
	I0830 22:19:48.679834  994624 main.go:141] libmachine: (no-preload-698195) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:48.680237  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetConfigRaw
	I0830 22:19:48.681064  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.683388  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.683844  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.683884  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.684153  994624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/config.json ...
	I0830 22:19:48.684435  994624 machine.go:88] provisioning docker machine ...
	I0830 22:19:48.684462  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:48.684708  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.684851  994624 buildroot.go:166] provisioning hostname "no-preload-698195"
	I0830 22:19:48.684883  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.685013  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.687508  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.687975  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.688018  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.688198  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.688413  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688599  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688830  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.689061  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.689695  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.689718  994624 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-698195 && echo "no-preload-698195" | sudo tee /etc/hostname
	I0830 22:19:45.014985  995603 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:45.015030  995603 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:45.015045  995603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:45.015102  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:45.049952  995603 cri.go:89] found id: ""
	I0830 22:19:45.050039  995603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:45.065202  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:45.074198  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:45.074330  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083407  995603 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083438  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:45.211527  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.256339  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.044735651s)
	I0830 22:19:46.256389  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.469714  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.542945  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.644533  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:46.644632  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:46.659432  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.182415  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.682613  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.182661  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.206336  995603 api_server.go:72] duration metric: took 1.561801361s to wait for apiserver process to appear ...
	I0830 22:19:48.206374  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:48.206399  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
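	(The restart path above ends by probing the apiserver's /healthz endpoint on https://192.168.39.10:8443. That endpoint returns the body "ok" with HTTP 200 once the apiserver is serving requests; a manual equivalent, with -k skipping verification of the cluster's self-signed CA, purely for illustration:)
	curl -k https://192.168.39.10:8443/healthz
	# expected output when healthy: ok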
	I0830 22:19:50.136893  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 1m0.108561967s
	I0830 22:19:50.136941  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:50.136952  994705 fix.go:54] fixHost starting: 
	I0830 22:19:50.137347  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:50.137386  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:50.156678  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0830 22:19:50.157148  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:50.157739  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:19:50.157765  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:50.158103  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:50.158283  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.158445  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:19:50.160098  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Running err=<nil>
	W0830 22:19:50.160115  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:50.162162  994705 out.go:177] * Updating the running kvm2 "embed-certs-208903" VM ...
	I0830 22:19:50.163634  994705 machine.go:88] provisioning docker machine ...
	I0830 22:19:50.163663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.163906  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164077  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:19:50.164104  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164288  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.166831  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167198  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.167234  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167371  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.167561  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167731  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167902  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.168108  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.168592  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.168610  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:19:50.306738  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:19:50.306772  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.309523  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.309929  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.309974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.310182  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.310349  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310638  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310845  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.311027  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.311610  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.311644  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:50.433972  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:50.434005  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:50.434045  994705 buildroot.go:174] setting up certificates
	I0830 22:19:50.434057  994705 provision.go:83] configureAuth start
	I0830 22:19:50.434069  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.434388  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:19:50.437450  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.437883  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.437916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.438115  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.440654  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.441059  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441213  994705 provision.go:138] copyHostCerts
	I0830 22:19:50.441271  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:50.441283  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:50.441352  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:50.441453  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:50.441462  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:50.441481  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:50.441563  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:50.441575  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:50.441606  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:50.441684  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:19:50.721978  994705 provision.go:172] copyRemoteCerts
	I0830 22:19:50.722039  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:50.722072  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.724893  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725257  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.725289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725571  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.725799  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.726014  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.726181  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:19:50.817217  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:50.843335  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:50.869233  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:50.897508  994705 provision.go:86] duration metric: configureAuth took 463.432948ms
	I0830 22:19:50.897544  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:50.897804  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:50.897904  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.900633  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.901040  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901210  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.901404  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901547  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901680  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.901875  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.902287  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.902310  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:51.128816  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.128855  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:19:51.128866  994705 machine.go:91] provisioned docker machine in 965.212906ms
	I0830 22:19:51.128900  994705 fix.go:56] fixHost completed within 991.948899ms
	I0830 22:19:51.128906  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 991.990648ms
	W0830 22:19:51.129050  994705 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p embed-certs-208903" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.131823  994705 out.go:177] 
	W0830 22:19:51.133957  994705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0830 22:19:51.133985  994705 out.go:239] * 
	W0830 22:19:51.134788  994705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:19:51.136212  994705 out.go:177] 
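	(The embed-certs-208903 provisioning above fails because `systemctl restart crio` reports "A dependency job for crio.service failed", which aborts the start with GUEST_PROVISION. The log only captures the symptom; finding the dependency unit that failed would take something like the generic triage commands below inside the guest. These are a sketch, not commands this test run executed.)
	# show crio's unit dependency tree, then the recent journal entries for the service
	minikube ssh -p embed-certs-208903 -- 'sudo systemctl list-dependencies crio'
	minikube ssh -p embed-certs-208903 -- 'sudo journalctl -u crio --no-pager | tail -n 50'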
	I0830 22:19:48.842387  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-698195
	
	I0830 22:19:48.842438  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.845727  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846100  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.846140  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846429  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.846658  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846856  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846991  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.847159  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.847578  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.847601  994624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-698195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-698195/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-698195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:48.994130  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:48.994176  994624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:48.994211  994624 buildroot.go:174] setting up certificates
	I0830 22:19:48.994244  994624 provision.go:83] configureAuth start
	I0830 22:19:48.994270  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.994612  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.997772  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998170  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.998208  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998416  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.001089  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001466  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.001498  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001639  994624 provision.go:138] copyHostCerts
	I0830 22:19:49.001702  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:49.001733  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:49.001808  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:49.001927  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:49.001937  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:49.001967  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:49.002042  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:49.002057  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:49.002085  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:49.002169  994624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.no-preload-698195 san=[192.168.72.28 192.168.72.28 localhost 127.0.0.1 minikube no-preload-698195]
	I0830 22:19:49.376465  994624 provision.go:172] copyRemoteCerts
	I0830 22:19:49.376534  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:49.376565  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.379932  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380313  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.380354  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380486  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.380738  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.380949  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.381109  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.474102  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:49.496563  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:49.518034  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:49.539392  994624 provision.go:86] duration metric: configureAuth took 545.126518ms
	I0830 22:19:49.539419  994624 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:49.539623  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:49.539719  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.542336  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542665  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.542738  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542839  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.543026  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543217  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543341  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.543459  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:49.543864  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:49.543882  994624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:49.869021  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:49.869051  994624 machine.go:91] provisioned docker machine in 1.184598655s
	I0830 22:19:49.869065  994624 start.go:300] post-start starting for "no-preload-698195" (driver="kvm2")
	I0830 22:19:49.869079  994624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:49.869110  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:49.869444  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:49.869481  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.871931  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872288  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.872333  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872502  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.872706  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.872888  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.873027  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.969286  994624 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:49.973513  994624 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:49.973532  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:49.973598  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:49.973671  994624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:49.973768  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:49.982880  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:50.006097  994624 start.go:303] post-start completed in 137.016363ms
	I0830 22:19:50.006124  994624 fix.go:56] fixHost completed within 24.947983055s
	I0830 22:19:50.006150  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.008513  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.008880  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.008908  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.009134  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.009371  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009560  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009755  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.009933  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.010372  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:50.010402  994624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0830 22:19:50.136709  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433990.121404659
	
	I0830 22:19:50.136738  994624 fix.go:206] guest clock: 1693433990.121404659
	I0830 22:19:50.136748  994624 fix.go:219] Guest: 2023-08-30 22:19:50.121404659 +0000 UTC Remote: 2023-08-30 22:19:50.006128322 +0000 UTC m=+361.306139641 (delta=115.276337ms)
	I0830 22:19:50.136792  994624 fix.go:190] guest clock delta is within tolerance: 115.276337ms
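The guest-clock check above just parses the output of `date +%s.%N` on the VM and compares it with the controller's wall clock. A minimal Go sketch of that comparison, using a sample value copied from the log and an illustrative 2-second tolerance (minikube's real tolerance and helper names differ):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1693433990.121404659") // sample value from the log above
    	if err != nil {
    		panic(err)
    	}
    	host := time.Now()
    	delta := guest.Sub(host)
    	// Accept small skew; anything larger would need an explicit clock sync.
    	tolerance := 2 * time.Second
    	fmt.Printf("guest/host clock delta: %v (within tolerance: %v)\n",
    		delta, math.Abs(float64(delta)) <= float64(tolerance))
    }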
	I0830 22:19:50.136800  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 25.078698183s
	I0830 22:19:50.136834  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.137143  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:50.139834  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140214  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.140249  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140387  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.140890  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141088  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141191  994624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:50.141243  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.141312  994624 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:50.141335  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.144030  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144283  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144434  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144462  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144598  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144736  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144768  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144791  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.144912  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144987  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145152  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.145161  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.145318  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145433  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.257719  994624 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:50.263507  994624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:50.411574  994624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:50.418796  994624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:50.418872  994624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:50.435922  994624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:50.435943  994624 start.go:466] detecting cgroup driver to use...
	I0830 22:19:50.436022  994624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:50.450969  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:50.463538  994624 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:50.463596  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:50.475797  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:50.488143  994624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:50.586327  994624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:50.697497  994624 docker.go:212] disabling docker service ...
	I0830 22:19:50.697587  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:50.712369  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:50.726039  994624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:50.840596  994624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:50.967799  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:50.984629  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:51.006331  994624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:51.006404  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.017150  994624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:51.017241  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.028714  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.040075  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.054520  994624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
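The sed invocations above replace whole lines in /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager. A rough Go equivalent of those substitutions, shown only to illustrate the rewrite; the function name and sample config are made up:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf mirrors the sed commands in the log: any existing
    // pause_image / cgroup_manager line is replaced wholesale.
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	return conf
    }

    func main() {
    	sample := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }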
	I0830 22:19:51.067179  994624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:51.077610  994624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:51.077685  994624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:51.093337  994624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:51.104110  994624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:51.243534  994624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:51.455149  994624 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:51.455232  994624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:51.462110  994624 start.go:534] Will wait 60s for crictl version
	I0830 22:19:51.462185  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:51.468872  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:51.509838  994624 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:51.509924  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.562065  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.630813  994624 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:47.961668  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:50.461541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:51.632256  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:51.636020  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636430  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:51.636458  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636633  994624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:51.641003  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:51.655539  994624 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:51.655595  994624 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:51.691423  994624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:51.691455  994624 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:51.691508  994624 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.691795  994624 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.691800  994624 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.691932  994624 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.692015  994624 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.692204  994624 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.692383  994624 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693156  994624 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.693256  994624 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.693294  994624 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.693393  994624 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.693613  994624 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.693700  994624 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693767  994624 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.694704  994624 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.695502  994624 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.858227  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.862141  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.862588  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.864659  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.872937  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0830 22:19:51.885087  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.912710  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.970615  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.978831  994624 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0830 22:19:51.978883  994624 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.978930  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.004057  994624 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0830 22:19:52.004112  994624 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.004153  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031261  994624 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0830 22:19:52.031330  994624 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.031350  994624 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0830 22:19:52.031393  994624 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.031456  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031394  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168753  994624 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0830 22:19:52.168817  994624 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.168842  994624 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0830 22:19:52.168760  994624 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0830 22:19:52.168882  994624 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.168906  994624 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.168931  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168944  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168948  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:52.168877  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168988  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.169048  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.169156  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.218220  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0830 22:19:52.218353  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.235432  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.235565  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0830 22:19:52.235575  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.235692  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:52.246243  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.246437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0830 22:19:52.246550  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:52.260976  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0830 22:19:52.261024  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0830 22:19:52.261041  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:19:52.262450  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0830 22:19:52.316437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0830 22:19:52.316556  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:19:52.316709  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0830 22:19:52.316807  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:19:52.330026  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0830 22:19:52.330185  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0830 22:19:52.330318  994624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:19:53.207917  995603 api_server.go:269] stopped: https://192.168.39.10:8443/healthz: Get "https://192.168.39.10:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0830 22:19:53.207968  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.224442  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:54.224482  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:54.724967  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.732845  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:54.732880  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.224677  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.231265  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:55.231302  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.725325  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.731785  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:19:55.739996  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:19:55.740025  995603 api_server.go:131] duration metric: took 7.533643458s to wait for apiserver health ...
	I0830 22:19:55.740037  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:55.740046  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:55.742083  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
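The /healthz probes in this block follow a simple pattern: anonymous requests first get 403, then 500 while the post-start hooks finish, then 200. A minimal polling loop in Go against the same kind of endpoint; the client settings here (InsecureSkipVerify, timeouts) are illustrative only, not what minikube actually configures:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls /healthz until it returns 200 or the deadline passes.
    func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			// 403 (anonymous user) and 500 (post-start hooks still running) both mean "retry".
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	if err := waitForHealthz(client, "https://192.168.39.10:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }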
	I0830 22:19:52.462806  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:54.462856  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:56.962847  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:55.697808  994624 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (3.436622341s)
	I0830 22:19:55.697847  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0830 22:19:55.697882  994624 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1: (3.381312107s)
	I0830 22:19:55.697895  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0830 22:19:55.697927  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (3.436796784s)
	I0830 22:19:55.697959  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0830 22:19:55.697985  994624 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.381155963s)
	I0830 22:19:55.698014  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0830 22:19:55.697989  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:55.698035  994624 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.367694611s)
	I0830 22:19:55.698065  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0830 22:19:55.698072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:57.158231  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.460131868s)
	I0830 22:19:57.158266  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0830 22:19:57.158302  994624 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:57.158371  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:55.743724  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:55.755829  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:19:55.777604  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:55.792182  995603 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:55.792221  995603 system_pods.go:61] "coredns-5644d7b6d9-872nn" [acd3b375-2486-48c3-9032-6386a091128a] Running
	I0830 22:19:55.792232  995603 system_pods.go:61] "coredns-5644d7b6d9-lqn5v" [48a574c1-b546-4060-9aba-1e2bcdaf7541] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:55.792240  995603 system_pods.go:61] "etcd-old-k8s-version-250163" [8d4eb3c4-a10b-4803-a1cd-28199081480d] Running
	I0830 22:19:55.792247  995603 system_pods.go:61] "kube-apiserver-old-k8s-version-250163" [c2cb0944-0836-4419-9bcf-8b6ddcb8de4f] Running
	I0830 22:19:55.792253  995603 system_pods.go:61] "kube-controller-manager-old-k8s-version-250163" [953d90e1-21ec-47a8-916a-9641616443a3] Running
	I0830 22:19:55.792259  995603 system_pods.go:61] "kube-proxy-qg82w" [58c1bd37-de42-46db-8337-cad3969dbbe3] Running
	I0830 22:19:55.792265  995603 system_pods.go:61] "kube-scheduler-old-k8s-version-250163" [ead115ca-3faa-457a-a29d-6de753bf53ab] Running
	I0830 22:19:55.792271  995603 system_pods.go:61] "storage-provisioner" [e481c13c-17b5-4a76-8f56-01decf4d2dde] Running
	I0830 22:19:55.792278  995603 system_pods.go:74] duration metric: took 14.654143ms to wait for pod list to return data ...
	I0830 22:19:55.792291  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:55.800500  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:55.800529  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:55.800541  995603 node_conditions.go:105] duration metric: took 8.245305ms to run NodePressure ...
	I0830 22:19:55.800572  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:56.165598  995603 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:56.173177  995603 retry.go:31] will retry after 155.771258ms: kubelet not initialised
	I0830 22:19:56.335243  995603 retry.go:31] will retry after 435.88083ms: kubelet not initialised
	I0830 22:19:56.900108  995603 retry.go:31] will retry after 318.649581ms: kubelet not initialised
	I0830 22:19:57.226618  995603 retry.go:31] will retry after 906.607144ms: kubelet not initialised
	I0830 22:19:58.169644  995603 retry.go:31] will retry after 1.480507319s: kubelet not initialised
	I0830 22:19:59.662899  995603 retry.go:31] will retry after 1.43965579s: kubelet not initialised
	I0830 22:19:59.462944  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.463843  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.109412  995603 retry.go:31] will retry after 2.769965791s: kubelet not initialised
	I0830 22:20:03.884087  995603 retry.go:31] will retry after 5.524462984s: kubelet not initialised
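The retry.go lines above wait progressively longer between kubelet checks. A compact sketch of that retry-with-growing-delay pattern; the check function and delay constants are stand-ins, not minikube's real probe:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil keeps calling check with a growing, slightly jittered delay,
    // mirroring the "will retry after ..." lines in the log.
    func retryUntil(check func() bool, maxWait time.Duration) bool {
    	delay := 150 * time.Millisecond
    	deadline := time.Now().Add(maxWait)
    	for time.Now().Before(deadline) {
    		if check() {
    			return true
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("kubelet not initialised, will retry after %v\n", delay+jitter)
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return false
    }

    func main() {
    	attempts := 0
    	ok := retryUntil(func() bool {
    		attempts++
    		return attempts >= 4 // pretend the kubelet comes up on the fourth check
    	}, 30*time.Second)
    	fmt.Println("initialised:", ok)
    }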
	I0830 22:20:03.962393  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:06.463083  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:03.920494  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.762089682s)
	I0830 22:20:03.920528  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0830 22:20:03.920563  994624 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:03.920618  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:05.471647  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.551002795s)
	I0830 22:20:05.471696  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0830 22:20:05.471725  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:05.471808  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:07.437922  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.966087689s)
	I0830 22:20:07.437952  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0830 22:20:07.437986  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:07.438046  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:09.418426  995603 retry.go:31] will retry after 8.161662984s: kubelet not initialised
	I0830 22:20:08.961616  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:10.962062  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:09.894897  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.456819743s)
	I0830 22:20:09.894932  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0830 22:20:09.895001  994624 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:09.895072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:10.848591  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0830 22:20:10.848635  994624 cache_images.go:123] Successfully loaded all cached images
	I0830 22:20:10.848641  994624 cache_images.go:92] LoadImages completed in 19.157171696s
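LoadImages above first stats each tarball under /var/lib/minikube/images (skipping the copy when it already exists) and then streams it into the image store with `sudo podman load -i`. A simplified sketch of that load step via os/exec; the path and the sudo wrapper are illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // loadCachedImage loads a cached image tarball into the CRI-O image store via podman,
    // skipping the call when the tarball is missing.
    func loadCachedImage(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("cached image %s not present: %w", tarball, err)
    	}
    	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.28.1"); err != nil {
    		fmt.Println(err)
    	}
    }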
	I0830 22:20:10.848726  994624 ssh_runner.go:195] Run: crio config
	I0830 22:20:10.912483  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:10.912514  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:10.912545  994624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:20:10.912574  994624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-698195 NodeName:no-preload-698195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:20:10.912729  994624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-698195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:20:10.912793  994624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-698195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:20:10.912850  994624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:20:10.922383  994624 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:20:10.922470  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:20:10.931904  994624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0830 22:20:10.947603  994624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:20:10.963835  994624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0830 22:20:10.982645  994624 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0830 22:20:10.986493  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:20:10.999967  994624 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195 for IP: 192.168.72.28
	I0830 22:20:11.000000  994624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:11.000190  994624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:20:11.000252  994624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:20:11.000348  994624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.key
	I0830 22:20:11.000455  994624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key.f951a290
	I0830 22:20:11.000518  994624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key
	I0830 22:20:11.000668  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:20:11.000712  994624 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:20:11.000728  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:20:11.000844  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:20:11.000881  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:20:11.000917  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:20:11.000978  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:20:11.001876  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:20:11.025256  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:20:11.048414  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:20:11.072696  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:20:11.097029  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:20:11.123653  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:20:11.152564  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:20:11.180885  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:20:11.204194  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:20:11.227365  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:20:11.249804  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:20:11.272563  994624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:20:11.289225  994624 ssh_runner.go:195] Run: openssl version
	I0830 22:20:11.295235  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:20:11.304745  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309554  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309615  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.314775  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:20:11.327372  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:20:11.338944  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344731  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344797  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.350242  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:20:11.359913  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:20:11.369367  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373467  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373511  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.378731  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
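The `ln -fs` commands above give each CA an OpenSSL-style <subject-hash>.0 symlink under /etc/ssl/certs, where the hash comes from `openssl x509 -hash -noout`. A small Go sketch of that step; the directory layout is the conventional one and the helper name is made up:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash creates the <subject-hash>.0 symlink that OpenSSL-based
    // clients use to look up a trusted certificate.
    func linkCertByHash(certsDir, certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // mirror `ln -fs`, which overwrites an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertByHash("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }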
	I0830 22:20:11.387877  994624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:20:11.392496  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:20:11.398057  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:20:11.403555  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:20:11.409343  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:20:11.414914  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:20:11.420465  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
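Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours. The same check in pure Go with crypto/x509; the path is taken from the log and the helper name is illustrative:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // which is what `openssl x509 -checkend <seconds>` checks.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }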
	I0830 22:20:11.425887  994624 kubeadm.go:404] StartCluster: {Name:no-preload-698195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:20:11.425988  994624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:20:11.426031  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:11.458215  994624 cri.go:89] found id: ""
	I0830 22:20:11.458307  994624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:20:11.468981  994624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:20:11.469010  994624 kubeadm.go:636] restartCluster start
	I0830 22:20:11.469068  994624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:20:11.478113  994624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.479707  994624 kubeconfig.go:92] found "no-preload-698195" server: "https://192.168.72.28:8443"
	I0830 22:20:11.483097  994624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:20:11.492068  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.492123  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.502752  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.502766  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.502803  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.514139  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.014881  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.014982  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.027078  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.514591  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.514686  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.529329  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.014971  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.015068  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.026874  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.514310  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.514395  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.526406  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.461372  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:15.961535  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:14.014646  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.014750  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.026467  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:14.515116  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.515212  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.527110  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.014622  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.014713  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.026083  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.515205  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.515304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.530248  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.014368  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.014472  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.025785  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.514315  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.514390  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.525823  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.014305  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.014410  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.025657  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.515255  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.515331  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.527967  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.014524  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.014603  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.025912  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.514454  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.514533  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.526034  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.586022  995603 retry.go:31] will retry after 7.910874514s: kubelet not initialised
	I0830 22:20:18.460574  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:20.460727  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:19.014477  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.014563  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.025688  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:19.514231  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.514318  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.526253  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.014551  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.014632  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.026223  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.515044  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.515142  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.526336  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.014933  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:21.015017  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:21.026315  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.492708  994624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:20:21.492739  994624 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:20:21.492755  994624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:20:21.492837  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:21.528882  994624 cri.go:89] found id: ""
	I0830 22:20:21.528979  994624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:20:21.545258  994624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:20:21.554325  994624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:20:21.554387  994624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563086  994624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563121  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:21.688507  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.342362  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.552586  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.618512  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.699936  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:20:22.700029  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.715983  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.231090  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.730985  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.462833  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.462913  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:26.960795  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.230937  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:24.730685  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.230888  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.256876  994624 api_server.go:72] duration metric: took 2.556939469s to wait for apiserver process to appear ...
	I0830 22:20:25.256907  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:20:25.256929  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:25.502804  995603 retry.go:31] will retry after 19.65596925s: kubelet not initialised
	I0830 22:20:28.908329  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.908366  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:28.908382  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:28.973483  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.973534  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:29.474026  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.480796  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.480850  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:29.974406  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.981421  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.981453  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:30.474452  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:30.479311  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:20:30.490550  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:20:30.490581  994624 api_server.go:131] duration metric: took 5.233664737s to wait for apiserver health ...
	I0830 22:20:30.490621  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:30.490634  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:30.492834  994624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:20:28.962577  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:31.461661  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:30.494469  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:20:30.508611  994624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:20:30.536470  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:20:30.547285  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:20:30.547321  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:20:30.547339  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:20:30.547352  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:20:30.547361  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:20:30.547369  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:20:30.547379  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:20:30.547391  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:20:30.547405  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:20:30.547416  994624 system_pods.go:74] duration metric: took 10.921869ms to wait for pod list to return data ...
	I0830 22:20:30.547428  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:20:30.550787  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:20:30.550816  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:20:30.550828  994624 node_conditions.go:105] duration metric: took 3.391486ms to run NodePressure ...
	I0830 22:20:30.550856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:30.786117  994624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793653  994624 kubeadm.go:787] kubelet initialised
	I0830 22:20:30.793680  994624 kubeadm.go:788] duration metric: took 7.533543ms waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793694  994624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:30.800474  994624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.808844  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808869  994624 pod_ready.go:81] duration metric: took 8.371156ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.808879  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808888  994624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.823461  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823487  994624 pod_ready.go:81] duration metric: took 14.590789ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.823497  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823504  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.834123  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834150  994624 pod_ready.go:81] duration metric: took 10.63758ms waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.834158  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834164  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.951589  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951620  994624 pod_ready.go:81] duration metric: took 117.448834ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.951628  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951635  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.343471  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343497  994624 pod_ready.go:81] duration metric: took 391.855831ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.343506  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343512  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.741491  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741527  994624 pod_ready.go:81] duration metric: took 398.007277ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.741539  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741555  994624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:32.141918  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141952  994624 pod_ready.go:81] duration metric: took 400.379332ms waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:32.141961  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141969  994624 pod_ready.go:38] duration metric: took 1.348263054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:32.141987  994624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:20:32.153800  994624 ops.go:34] apiserver oom_adj: -16
	I0830 22:20:32.153828  994624 kubeadm.go:640] restartCluster took 20.684809572s
	I0830 22:20:32.153848  994624 kubeadm.go:406] StartCluster complete in 20.727972693s
	I0830 22:20:32.153868  994624 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.153955  994624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:20:32.155765  994624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.156054  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:20:32.156162  994624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:20:32.156265  994624 addons.go:69] Setting storage-provisioner=true in profile "no-preload-698195"
	I0830 22:20:32.156285  994624 addons.go:231] Setting addon storage-provisioner=true in "no-preload-698195"
	I0830 22:20:32.156288  994624 addons.go:69] Setting default-storageclass=true in profile "no-preload-698195"
	I0830 22:20:32.156307  994624 addons.go:69] Setting metrics-server=true in profile "no-preload-698195"
	I0830 22:20:32.156344  994624 addons.go:231] Setting addon metrics-server=true in "no-preload-698195"
	I0830 22:20:32.156318  994624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-698195"
	I0830 22:20:32.156396  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	W0830 22:20:32.156293  994624 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:20:32.156512  994624 host.go:66] Checking if "no-preload-698195" exists ...
	W0830 22:20:32.156358  994624 addons.go:240] addon metrics-server should already be in state true
	I0830 22:20:32.156570  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.156821  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156847  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156849  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156867  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156948  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156961  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.165443  994624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-698195" context rescaled to 1 replicas
	I0830 22:20:32.165497  994624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:20:32.167715  994624 out.go:177] * Verifying Kubernetes components...
	I0830 22:20:32.169310  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:20:32.176341  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0830 22:20:32.176876  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0830 22:20:32.177070  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0830 22:20:32.177253  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177447  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177562  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177829  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.177856  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178014  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.178032  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178387  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179460  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179499  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.179517  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.179897  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179957  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.179996  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180272  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.180293  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180423  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.201009  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0830 22:20:32.201548  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.201926  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0830 22:20:32.202180  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202200  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.202304  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.202785  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.202842  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202865  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.203052  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.203202  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.203391  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.204424  994624 addons.go:231] Setting addon default-storageclass=true in "no-preload-698195"
	W0830 22:20:32.204450  994624 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:20:32.204491  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.204897  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.204931  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.205076  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.207516  994624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:20:32.206126  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.209336  994624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:20:32.210840  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:20:32.209276  994624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.210862  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:20:32.210877  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:20:32.210890  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.210897  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.214370  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214385  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214769  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214813  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214841  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.215131  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215199  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215346  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215387  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215521  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215580  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215651  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.215748  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.244173  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0830 22:20:32.244664  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.245311  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.245343  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.245760  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.246361  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.246416  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.263737  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0830 22:20:32.264177  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.264737  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.264761  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.265106  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.265342  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.266996  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.267406  994624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.267430  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:20:32.267454  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.270345  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.270799  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.270829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.271021  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.271215  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.271403  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.271526  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.362089  994624 node_ready.go:35] waiting up to 6m0s for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:32.362281  994624 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0830 22:20:32.371216  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.372220  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:20:32.372240  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:20:32.396916  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:20:32.396942  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:20:32.417651  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.430668  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:32.430699  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:20:32.476147  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:33.655453  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.284190116s)
	I0830 22:20:33.655495  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.237806074s)
	I0830 22:20:33.655515  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655532  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655519  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655602  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655854  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.655875  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.655885  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655894  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656045  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656082  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656095  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656115  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656160  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656169  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656180  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656195  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656394  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656432  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656437  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656455  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656465  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656729  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656741  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656754  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.802947  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326756295s)
	I0830 22:20:33.802994  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803016  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803349  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803371  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803381  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803391  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803393  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803632  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803682  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803700  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803720  994624 addons.go:467] Verifying addon metrics-server=true in "no-preload-698195"
	I0830 22:20:33.805489  994624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:20:33.462414  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:35.961487  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:33.806934  994624 addons.go:502] enable addons completed in 1.650789204s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:20:34.550814  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:36.551274  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:38.551355  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:37.963175  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:40.462510  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:39.550464  994624 node_ready.go:49] node "no-preload-698195" has status "Ready":"True"
	I0830 22:20:39.550505  994624 node_ready.go:38] duration metric: took 7.188369926s waiting for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:39.550516  994624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:39.556533  994624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562470  994624 pod_ready.go:92] pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.562498  994624 pod_ready.go:81] duration metric: took 5.934964ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562511  994624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568348  994624 pod_ready.go:92] pod "etcd-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.568371  994624 pod_ready.go:81] duration metric: took 5.853085ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568380  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:41.593857  994624 pod_ready.go:102] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:42.594544  994624 pod_ready.go:92] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.594572  994624 pod_ready.go:81] duration metric: took 3.026185311s waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.594586  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599820  994624 pod_ready.go:92] pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.599844  994624 pod_ready.go:81] duration metric: took 5.249213ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599856  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751073  994624 pod_ready.go:92] pod "kube-proxy-5fjvd" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.751096  994624 pod_ready.go:81] duration metric: took 151.233562ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751105  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150620  994624 pod_ready.go:92] pod "kube-scheduler-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:43.150646  994624 pod_ready.go:81] duration metric: took 399.535323ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150656  994624 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.464235  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:44.960831  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:46.961923  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.458489  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:47.958322  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.165236  995603 kubeadm.go:787] kubelet initialised
	I0830 22:20:45.165261  995603 kubeadm.go:788] duration metric: took 48.999634631s waiting for restarted kubelet to initialise ...
	I0830 22:20:45.165269  995603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:45.170939  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176235  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.176259  995603 pod_ready.go:81] duration metric: took 5.296469ms waiting for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176271  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180703  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.180718  995603 pod_ready.go:81] duration metric: took 4.44114ms waiting for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180725  995603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185225  995603 pod_ready.go:92] pod "etcd-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.185244  995603 pod_ready.go:81] duration metric: took 4.512736ms waiting for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185255  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190403  995603 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.190425  995603 pod_ready.go:81] duration metric: took 5.162774ms waiting for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190436  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564427  995603 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.564460  995603 pod_ready.go:81] duration metric: took 374.00421ms waiting for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564473  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964836  995603 pod_ready.go:92] pod "kube-proxy-qg82w" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.964857  995603 pod_ready.go:81] duration metric: took 400.377393ms waiting for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964866  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364023  995603 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:46.364046  995603 pod_ready.go:81] duration metric: took 399.172301ms waiting for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364060  995603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:48.672124  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:48.962198  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.461425  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:49.958485  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.959424  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.170855  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.172690  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.962708  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.461729  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:54.458026  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.458124  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.459811  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:55.669393  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:57.670454  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:59.670654  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.463098  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.962495  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.960274  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.457998  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:02.170872  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:04.670725  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.460674  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.461496  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.459727  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.959179  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:06.671066  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.169869  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.463765  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.961943  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.959351  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.458921  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:11.171435  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:13.171597  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.461881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.961416  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.459572  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:16.960064  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:15.670176  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:18.170049  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:17.460985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.462323  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.963325  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.459085  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.460169  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:20.671600  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.169931  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:24.464683  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.962740  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.958014  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.458502  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.458654  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:25.670985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.171321  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:29.461798  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:31.961714  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.464431  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.958557  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.669588  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.670695  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.671313  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.463531  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:36.960658  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.960256  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.460047  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.168958  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.170995  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:38.961145  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:40.961870  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.958213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.958373  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.670302  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.171198  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:43.461666  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:45.461738  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.459123  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.459226  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.459428  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.670708  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.671826  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:47.462306  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:49.462771  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.962010  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:50.958149  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:52.958493  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.169610  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:53.170386  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.461116  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:56.959735  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.959069  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.458784  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:55.172123  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.670323  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.671985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:58.961225  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:00.961822  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.959058  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:01.959700  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.170880  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:04.171473  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.961938  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:05.461758  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:03.960213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.458196  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:08.458500  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.671998  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:09.169979  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:07.962031  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.460716  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.960753  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.459638  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:11.669885  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.670821  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:12.461433  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:14.463156  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:16.961558  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.459765  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:17.959192  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.671350  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:18.170569  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.462375  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:21.961785  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.959308  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.457592  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:20.173424  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.671008  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:23.961985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.962149  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:24.458343  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:26.958471  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.169264  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.181579  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.670923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.964954  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:30.461530  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.458262  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:31.463334  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.171662  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.670239  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.961287  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.961787  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:33.957827  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:35.958367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.960259  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:36.671642  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.169834  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.462107  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.961576  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.961773  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:40.458367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:42.458710  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.671303  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.170994  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:43.964448  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.461777  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.958652  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.960005  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.171108  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.670866  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.462315  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:50.462456  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:49.459011  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.958137  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.170020  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.171135  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:52.462694  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:54.962055  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.958728  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.959555  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.671421  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:58.169881  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.461322  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:59.461865  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:01.963541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.458148  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.458834  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.170265  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.170719  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.670111  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:03.967458  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:05.972793  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.958722  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:07.458954  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:06.670434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.671269  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.461195  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:10.961859  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:09.458999  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.958146  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.170482  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.670156  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.462648  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.463851  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.958659  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.962293  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.458707  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.670647  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.170462  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:17.960881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:19.962032  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.959370  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.459653  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.670329  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.169817  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:22.461024  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:24.461537  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:26.960897  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.958696  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.459488  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.671024  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.170228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:29.461009  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:31.461891  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.958318  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.958723  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.170683  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.670966  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:33.462005  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.960841  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:34.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.458068  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.170093  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.671411  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.961501  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.460893  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:39.458824  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:41.461623  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.170169  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.670892  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.461840  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:43.154742  995192 pod_ready.go:81] duration metric: took 4m0.000931927s waiting for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	E0830 22:23:43.154776  995192 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:23:43.154798  995192 pod_ready.go:38] duration metric: took 4m7.830262728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:23:43.154853  995192 kubeadm.go:640] restartCluster took 4m30.336637887s
	W0830 22:23:43.154961  995192 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:23:43.155001  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0830 22:23:43.959940  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:46.458406  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:45.170898  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:47.670457  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:48.957451  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:51.457818  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:50.171371  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:52.171468  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:54.670175  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:53.958105  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:56.458176  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:57.169990  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:59.177173  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:58.957583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:00.958404  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:02.958866  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:01.670484  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:03.671368  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.457466  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:07.457893  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.671480  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:08.170128  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:09.458376  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:11.958335  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:10.171221  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:12.171398  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.171694  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.432406  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.277378744s)
	I0830 22:24:14.432498  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:14.446038  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:24:14.455354  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:24:14.464292  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:24:14.464332  995192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 22:24:14.680764  995192 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:24:13.965662  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.460984  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.171891  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.671072  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.958205  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.959096  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:23.459244  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.671733  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:22.671947  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.677772  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.927380  995192 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:24:24.927462  995192 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:24:24.927559  995192 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:24:24.927697  995192 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:24:24.927843  995192 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:24:24.927938  995192 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:24:24.929775  995192 out.go:204]   - Generating certificates and keys ...
	I0830 22:24:24.929895  995192 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:24:24.930004  995192 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:24:24.930118  995192 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:24:24.930202  995192 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:24:24.930321  995192 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:24:24.930408  995192 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:24:24.930485  995192 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:24:24.930559  995192 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:24:24.930658  995192 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:24:24.930756  995192 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:24:24.930821  995192 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:24:24.930922  995192 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:24:24.931009  995192 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:24:24.931077  995192 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:24:24.931170  995192 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:24:24.931245  995192 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:24:24.931354  995192 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:24:24.931430  995192 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:24:24.934341  995192 out.go:204]   - Booting up control plane ...
	I0830 22:24:24.934422  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:24:24.934524  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:24:24.934580  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:24:24.934689  995192 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:24:24.934770  995192 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:24:24.934809  995192 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:24:24.934936  995192 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:24:24.935014  995192 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003378 seconds
	I0830 22:24:24.935150  995192 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:24:24.935261  995192 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:24:24.935317  995192 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:24:24.935490  995192 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-791007 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:24:24.935540  995192 kubeadm.go:322] [bootstrap-token] Using token: 3t39h1.cgypp2756rpdn3ql
	I0830 22:24:24.937035  995192 out.go:204]   - Configuring RBAC rules ...
	I0830 22:24:24.937140  995192 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:24:24.937246  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:24:24.937428  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:24:24.937619  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:24:24.937762  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:24:24.937883  995192 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:24:24.938044  995192 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:24:24.938105  995192 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:24:24.938178  995192 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:24:24.938197  995192 kubeadm.go:322] 
	I0830 22:24:24.938277  995192 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:24:24.938290  995192 kubeadm.go:322] 
	I0830 22:24:24.938389  995192 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:24:24.938398  995192 kubeadm.go:322] 
	I0830 22:24:24.938429  995192 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:24:24.938506  995192 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:24:24.938577  995192 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:24:24.938586  995192 kubeadm.go:322] 
	I0830 22:24:24.938658  995192 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:24:24.938681  995192 kubeadm.go:322] 
	I0830 22:24:24.938745  995192 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:24:24.938754  995192 kubeadm.go:322] 
	I0830 22:24:24.938825  995192 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:24:24.938930  995192 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:24:24.939065  995192 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:24:24.939076  995192 kubeadm.go:322] 
	I0830 22:24:24.939160  995192 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:24:24.939266  995192 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:24:24.939280  995192 kubeadm.go:322] 
	I0830 22:24:24.939367  995192 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939452  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:24:24.939473  995192 kubeadm.go:322] 	--control-plane 
	I0830 22:24:24.939479  995192 kubeadm.go:322] 
	I0830 22:24:24.939597  995192 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:24:24.939606  995192 kubeadm.go:322] 
	I0830 22:24:24.939685  995192 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939848  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:24:24.939880  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:24:24.939916  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:24:24.942544  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:24:24.943961  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:24:24.990449  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:24:25.040966  995192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:24:25.041042  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.041041  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=default-k8s-diff-port-791007 minikube.k8s.io/updated_at=2023_08_30T22_24_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.441321  995192 ops.go:34] apiserver oom_adj: -16
	I0830 22:24:25.441492  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.557357  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.163303  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.663721  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.459794  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.957287  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.171894  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:29.671326  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.163474  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:27.664036  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.163187  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.663338  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.163719  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.663846  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.163288  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.663346  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.163165  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.663996  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.958583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.960227  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.671923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:34.171143  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:32.163631  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:32.663347  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.163634  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.663228  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.163600  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.663994  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.163597  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.663419  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.163764  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.663168  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.163646  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.663613  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.163643  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.264223  995192 kubeadm.go:1081] duration metric: took 13.22324453s to wait for elevateKubeSystemPrivileges.
	I0830 22:24:38.264262  995192 kubeadm.go:406] StartCluster complete in 5m25.484553135s
	I0830 22:24:38.264286  995192 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.264411  995192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:24:38.266553  995192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.266800  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:24:38.266990  995192 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:24:38.267105  995192 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267117  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:24:38.267126  995192 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267141  995192 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:24:38.267163  995192 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267184  995192 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267209  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267214  995192 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267234  995192 addons.go:240] addon metrics-server should already be in state true
	I0830 22:24:38.267207  995192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-791007"
	I0830 22:24:38.267330  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267664  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267735  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267806  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267797  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267851  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267866  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.285812  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0830 22:24:38.286287  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287008  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.287036  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.287384  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0830 22:24:38.287488  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0830 22:24:38.287526  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.287808  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287949  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.288154  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.288200  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.288370  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288516  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288582  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288562  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288947  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289135  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289343  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.289569  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.289610  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.299364  995192 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.299392  995192 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:24:38.299422  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.299824  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.299861  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.305325  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0830 22:24:38.305834  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.306214  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0830 22:24:38.306525  995192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-791007" context rescaled to 1 replicas
	I0830 22:24:38.306561  995192 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:24:38.308424  995192 out.go:177] * Verifying Kubernetes components...
	I0830 22:24:38.306646  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.306688  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.309840  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:38.309911  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310245  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310362  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.310381  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310433  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.310801  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.312319  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.314072  995192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:24:38.313018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.315723  995192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.315742  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:24:38.315759  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.317188  995192 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:24:34.457685  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.458268  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.459052  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.171434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.173228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.318441  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:24:38.318465  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:24:38.318488  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.319537  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.320365  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320640  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.321238  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.321431  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.321733  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.322284  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322691  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.322778  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322887  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.323058  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.323205  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.323265  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.328412  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0830 22:24:38.328853  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.329468  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.329479  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.329898  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.330379  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.330395  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.345318  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0830 22:24:38.345781  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.346309  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.346329  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.346665  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.346886  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.348620  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.348922  995192 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.348941  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:24:38.348961  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.351758  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352206  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.352233  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352357  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.352562  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.352787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.352918  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.474078  995192 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.474205  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:24:38.479269  995192 node_ready.go:49] node "default-k8s-diff-port-791007" has status "Ready":"True"
	I0830 22:24:38.479294  995192 node_ready.go:38] duration metric: took 5.181356ms waiting for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.479305  995192 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:38.486715  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:38.508419  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:24:38.508443  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:24:38.515075  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.532789  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.549460  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:24:38.549488  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:24:38.593580  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:38.593614  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:24:38.637965  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:40.093211  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.618968297s)
	I0830 22:24:40.093259  995192 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0830 22:24:40.526723  995192 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526748  995192 pod_ready.go:81] duration metric: took 2.040009497s waiting for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:40.526757  995192 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526765  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:40.552258  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.037149365s)
	I0830 22:24:40.552312  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019488451s)
	I0830 22:24:40.552317  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552381  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552351  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552696  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552714  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552724  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552734  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552891  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552902  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552918  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552927  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553114  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553132  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553170  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553202  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553210  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553219  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.553225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553478  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553493  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.776628  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.138598233s)
	I0830 22:24:40.776714  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.776731  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777199  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777224  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777246  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777256  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.777270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777546  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777626  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777647  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777667  995192 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-791007"
	I0830 22:24:40.779719  995192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:24:40.781134  995192 addons.go:502] enable addons completed in 2.51415908s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:24:40.459185  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:42.958731  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.150847  994624 pod_ready.go:81] duration metric: took 4m0.000170406s waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:43.150881  994624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:43.150893  994624 pod_ready.go:38] duration metric: took 4m3.600363648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.150919  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.150964  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:43.151043  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:43.199383  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:43.199412  994624 cri.go:89] found id: ""
	I0830 22:24:43.199420  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:43.199479  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.204289  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:43.204371  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:43.247303  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.247329  994624 cri.go:89] found id: ""
	I0830 22:24:43.247340  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:43.247396  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.252955  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:43.253024  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:43.286292  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.286318  994624 cri.go:89] found id: ""
	I0830 22:24:43.286327  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:43.286386  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.290585  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:43.290653  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:43.323616  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:43.323645  994624 cri.go:89] found id: ""
	I0830 22:24:43.323655  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:43.323729  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.328256  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:43.328326  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:43.363566  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:43.363595  994624 cri.go:89] found id: ""
	I0830 22:24:43.363605  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:43.363666  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.368006  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:43.368067  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:43.403728  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.403752  994624 cri.go:89] found id: ""
	I0830 22:24:43.403761  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:43.403833  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.407957  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:43.408020  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:43.438864  994624 cri.go:89] found id: ""
	I0830 22:24:43.438893  994624 logs.go:284] 0 containers: []
	W0830 22:24:43.438903  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:43.438911  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:43.438976  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:43.478905  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.478935  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:43.478942  994624 cri.go:89] found id: ""
	I0830 22:24:43.478951  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:43.479015  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.486919  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.496040  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:43.496070  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:43.669727  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:43.669764  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.712471  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:43.712508  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.746949  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:43.746988  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:42.573674  995192 pod_ready.go:92] pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.573706  995192 pod_ready.go:81] duration metric: took 2.046935361s waiting for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.573716  995192 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579433  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.579450  995192 pod_ready.go:81] duration metric: took 5.72841ms waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579458  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584499  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.584519  995192 pod_ready.go:81] duration metric: took 5.055504ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584527  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678045  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.678071  995192 pod_ready.go:81] duration metric: took 93.537153ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678084  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082548  995192 pod_ready.go:92] pod "kube-proxy-bbdvk" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.082576  995192 pod_ready.go:81] duration metric: took 404.485397ms waiting for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082585  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479813  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.479840  995192 pod_ready.go:81] duration metric: took 397.248046ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479851  995192 pod_ready.go:38] duration metric: took 5.000533366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.479872  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.479956  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:43.498558  995192 api_server.go:72] duration metric: took 5.191959207s to wait for apiserver process to appear ...
	I0830 22:24:43.498583  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:43.498603  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:24:43.504260  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:24:43.505566  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:43.505589  995192 api_server.go:131] duration metric: took 6.997863ms to wait for apiserver health ...
	I0830 22:24:43.505598  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:43.682798  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:43.682837  995192 system_pods.go:61] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:43.682846  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:43.682856  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:43.682863  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:43.682870  995192 system_pods.go:61] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:43.682876  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:43.682887  995192 system_pods.go:61] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:43.682897  995192 system_pods.go:61] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:43.682909  995192 system_pods.go:74] duration metric: took 177.304345ms to wait for pod list to return data ...
	I0830 22:24:43.682919  995192 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:43.878616  995192 default_sa.go:45] found service account: "default"
	I0830 22:24:43.878643  995192 default_sa.go:55] duration metric: took 195.70884ms for default service account to be created ...
	I0830 22:24:43.878654  995192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:44.083123  995192 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:44.083155  995192 system_pods.go:89] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:44.083161  995192 system_pods.go:89] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:44.083165  995192 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:44.083170  995192 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:44.083177  995192 system_pods.go:89] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:44.083181  995192 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:44.083187  995192 system_pods.go:89] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:44.083194  995192 system_pods.go:89] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:44.083203  995192 system_pods.go:126] duration metric: took 204.542978ms to wait for k8s-apps to be running ...
	I0830 22:24:44.083216  995192 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:44.083297  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:44.098110  995192 system_svc.go:56] duration metric: took 14.88196ms WaitForService to wait for kubelet.
	I0830 22:24:44.098143  995192 kubeadm.go:581] duration metric: took 5.7915497s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:44.098211  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:44.278751  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:44.278802  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:44.278814  995192 node_conditions.go:105] duration metric: took 180.597923ms to run NodePressure ...
	I0830 22:24:44.278825  995192 start.go:228] waiting for startup goroutines ...
	I0830 22:24:44.278831  995192 start.go:233] waiting for cluster config update ...
	I0830 22:24:44.278841  995192 start.go:242] writing updated cluster config ...
	I0830 22:24:44.279208  995192 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:44.332074  995192 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:44.334502  995192 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-791007" cluster and "default" namespace by default
	I0830 22:24:40.672327  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.171136  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.780116  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:43.780147  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.824462  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:43.824494  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:43.875847  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:43.875881  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:43.937533  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:43.937582  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:43.950917  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:43.950948  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.989236  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:43.989265  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:44.025171  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:44.025218  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:44.644566  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:44.644609  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:44.692321  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:44.692356  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.229304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:47.252442  994624 api_server.go:72] duration metric: took 4m15.086891336s to wait for apiserver process to appear ...
	I0830 22:24:47.252476  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:47.252521  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:47.252593  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:47.286367  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.286397  994624 cri.go:89] found id: ""
	I0830 22:24:47.286410  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:47.286461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.290812  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:47.290883  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:47.324349  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.324376  994624 cri.go:89] found id: ""
	I0830 22:24:47.324386  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:47.324440  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.329002  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:47.329072  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:47.362954  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:47.362985  994624 cri.go:89] found id: ""
	I0830 22:24:47.362996  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:47.363062  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.367498  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:47.367587  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:47.398450  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.398478  994624 cri.go:89] found id: ""
	I0830 22:24:47.398489  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:47.398550  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.402646  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:47.402741  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:47.438663  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:47.438691  994624 cri.go:89] found id: ""
	I0830 22:24:47.438701  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:47.438769  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.443046  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:47.443114  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:47.472698  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.472725  994624 cri.go:89] found id: ""
	I0830 22:24:47.472733  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:47.472792  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.477075  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:47.477150  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:47.507099  994624 cri.go:89] found id: ""
	I0830 22:24:47.507138  994624 logs.go:284] 0 containers: []
	W0830 22:24:47.507148  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:47.507157  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:47.507232  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:47.540635  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:47.540661  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.540667  994624 cri.go:89] found id: ""
	I0830 22:24:47.540676  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:47.540734  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.545274  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.549659  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:47.549681  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:47.605419  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:47.605460  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.646819  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:47.646856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.684772  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:47.684801  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.731741  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:47.731791  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.762713  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:47.762745  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:48.266510  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:48.266557  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:48.315124  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:48.315164  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:48.332407  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:48.332447  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:48.463670  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:48.463710  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:48.498034  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:48.498067  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:48.528326  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:48.528372  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:48.563858  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:48.563893  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
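The log collection above can be reproduced by hand on the node when a failure needs closer inspection. A minimal sketch, assuming the profile name no-preload-698195 from this run and that crictl is on the node's PATH (all of these commands appear verbatim in the Run: lines above):

	# open a shell on the node for this profile
	minikube ssh -p no-preload-698195
	# list container IDs for one component, then tail its logs
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>
	# unit logs for the kubelet and CRI-O
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400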
	I0830 22:24:45.670559  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:46.364206  995603 pod_ready.go:81] duration metric: took 4m0.000126235s waiting for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:46.364246  995603 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:46.364267  995603 pod_ready.go:38] duration metric: took 4m1.19899008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:46.364298  995603 kubeadm.go:640] restartCluster took 5m11.375966766s
	W0830 22:24:46.364364  995603 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:24:46.364394  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0830 22:24:51.095064  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:24:51.106674  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:24:51.108320  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:51.108339  994624 api_server.go:131] duration metric: took 3.855856321s to wait for apiserver health ...
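The same health probe can be run directly against the endpoint logged above; a quick check, skipping certificate verification with -k, or going through the kubeconfig already on the node:

	curl -k https://192.168.72.28:8443/healthz
	sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz

Both return the literal string "ok" when the apiserver is healthy, matching the response recorded here.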
	I0830 22:24:51.108347  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:51.108375  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:51.108422  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:51.140030  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:51.140059  994624 cri.go:89] found id: ""
	I0830 22:24:51.140069  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:51.140133  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.144302  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:51.144375  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:51.181915  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:51.181944  994624 cri.go:89] found id: ""
	I0830 22:24:51.181953  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:51.182007  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.187104  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:51.187171  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:51.220763  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:51.220794  994624 cri.go:89] found id: ""
	I0830 22:24:51.220806  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:51.220890  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.225368  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:51.225443  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:51.263131  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:51.263155  994624 cri.go:89] found id: ""
	I0830 22:24:51.263164  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:51.263231  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.268531  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:51.268586  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:51.307119  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.307145  994624 cri.go:89] found id: ""
	I0830 22:24:51.307154  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:51.307224  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.311914  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:51.311988  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:51.341363  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:51.341391  994624 cri.go:89] found id: ""
	I0830 22:24:51.341402  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:51.341461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.345501  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:51.345570  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:51.378276  994624 cri.go:89] found id: ""
	I0830 22:24:51.378311  994624 logs.go:284] 0 containers: []
	W0830 22:24:51.378322  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:51.378329  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:51.378398  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:51.416207  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.416228  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:51.416232  994624 cri.go:89] found id: ""
	I0830 22:24:51.416245  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:51.416295  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.421034  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.424911  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:51.424938  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.458543  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:51.458576  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.489189  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:51.489223  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:52.074879  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:52.074924  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:52.091316  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:52.091357  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:52.131564  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:52.131602  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:52.168850  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:52.168879  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:52.200329  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:52.200358  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:52.230767  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:52.230799  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:52.276139  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:52.276177  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:52.330487  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:52.330523  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:52.469305  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:52.469336  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:52.536395  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:52.536432  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:55.089149  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:55.089184  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.089194  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.089198  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.089203  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.089207  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.089211  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.089217  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.089224  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.089230  994624 system_pods.go:74] duration metric: took 3.980877363s to wait for pod list to return data ...
	I0830 22:24:55.089237  994624 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:55.091833  994624 default_sa.go:45] found service account: "default"
	I0830 22:24:55.091862  994624 default_sa.go:55] duration metric: took 2.618667ms for default service account to be created ...
	I0830 22:24:55.091871  994624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:55.098108  994624 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:55.098145  994624 system_pods.go:89] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.098154  994624 system_pods.go:89] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.098163  994624 system_pods.go:89] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.098179  994624 system_pods.go:89] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.098190  994624 system_pods.go:89] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.098201  994624 system_pods.go:89] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.098212  994624 system_pods.go:89] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.098233  994624 system_pods.go:89] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.098241  994624 system_pods.go:126] duration metric: took 6.364144ms to wait for k8s-apps to be running ...
	I0830 22:24:55.098250  994624 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:55.098297  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:55.114382  994624 system_svc.go:56] duration metric: took 16.118629ms WaitForService to wait for kubelet.
	I0830 22:24:55.114413  994624 kubeadm.go:581] duration metric: took 4m22.94887118s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:55.114435  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:55.118227  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:55.118256  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:55.118272  994624 node_conditions.go:105] duration metric: took 3.832437ms to run NodePressure ...
	I0830 22:24:55.118287  994624 start.go:228] waiting for startup goroutines ...
	I0830 22:24:55.118295  994624 start.go:233] waiting for cluster config update ...
	I0830 22:24:55.118309  994624 start.go:242] writing updated cluster config ...
	I0830 22:24:55.118611  994624 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:55.169756  994624 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:55.172028  994624 out.go:177] * Done! kubectl is now configured to use "no-preload-698195" cluster and "default" namespace by default
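Once the profile reports Done, the cluster can be exercised through the kubectl context minikube just wrote; a short check, assuming the context name matches the profile name as it normally does:

	kubectl --context no-preload-698195 get nodes
	kubectl --context no-preload-698195 -n kube-system get pods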
	I0830 22:25:09.359961  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (22.995525599s)
	I0830 22:25:09.360040  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:09.375757  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:25:09.385118  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:25:09.394601  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:25:09.394640  995603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0830 22:25:09.454824  995603 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0830 22:25:09.455022  995603 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:25:09.599893  995603 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:25:09.600055  995603 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:25:09.600213  995603 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:25:09.783920  995603 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:25:09.784082  995603 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:25:09.793193  995603 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0830 22:25:09.902777  995603 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:25:09.904820  995603 out.go:204]   - Generating certificates and keys ...
	I0830 22:25:09.904937  995603 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:25:09.905035  995603 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:25:09.905150  995603 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:25:09.905241  995603 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:25:09.905350  995603 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:25:09.905423  995603 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:25:09.905540  995603 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:25:09.905622  995603 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:25:09.905799  995603 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:25:09.905918  995603 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:25:09.905978  995603 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:25:09.906052  995603 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:25:10.141265  995603 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:25:10.238428  995603 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:25:10.387118  995603 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:25:10.620307  995603 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:25:10.625802  995603 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:25:10.627926  995603 out.go:204]   - Booting up control plane ...
	I0830 22:25:10.629866  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:25:10.635839  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:25:10.638800  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:25:10.641079  995603 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:25:10.666312  995603 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:25:20.671894  995603 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004868 seconds
	I0830 22:25:20.672078  995603 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:25:20.687003  995603 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:25:21.215417  995603 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:25:21.215657  995603 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-250163 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0830 22:25:21.726398  995603 kubeadm.go:322] [bootstrap-token] Using token: y3ik1i.subqwfsto1ck6o9y
	I0830 22:25:21.728095  995603 out.go:204]   - Configuring RBAC rules ...
	I0830 22:25:21.728243  995603 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:25:21.735828  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:25:21.741247  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:25:21.744588  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:25:21.747966  995603 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:25:21.835002  995603 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:25:22.157106  995603 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:25:22.157129  995603 kubeadm.go:322] 
	I0830 22:25:22.157207  995603 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:25:22.157221  995603 kubeadm.go:322] 
	I0830 22:25:22.157343  995603 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:25:22.157373  995603 kubeadm.go:322] 
	I0830 22:25:22.157410  995603 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:25:22.157493  995603 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:25:22.157572  995603 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:25:22.157581  995603 kubeadm.go:322] 
	I0830 22:25:22.157661  995603 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:25:22.157779  995603 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:25:22.157877  995603 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:25:22.157894  995603 kubeadm.go:322] 
	I0830 22:25:22.158002  995603 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0830 22:25:22.158104  995603 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:25:22.158119  995603 kubeadm.go:322] 
	I0830 22:25:22.158250  995603 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158415  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:25:22.158457  995603 kubeadm.go:322]     --control-plane 	  
	I0830 22:25:22.158467  995603 kubeadm.go:322] 
	I0830 22:25:22.158555  995603 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:25:22.158566  995603 kubeadm.go:322] 
	I0830 22:25:22.158674  995603 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158820  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:25:22.159148  995603 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
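The warning above only affects persistence across reboots; for a longer-lived node the fix is the one the message names:

	sudo systemctl enable kubelet.service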
	I0830 22:25:22.159192  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:25:22.159205  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:25:22.160970  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:25:22.162353  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:25:22.173835  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
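The exact contents of /etc/cni/net.d/1-k8s.conflist are not shown in the log; a representative bridge-plus-portmap conflist of roughly this shape (the field values here are illustrative assumptions, not the 457-byte file minikube wrote) can be dropped in the same location:

	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF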
	I0830 22:25:22.192193  995603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:25:22.192332  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=old-k8s-version-250163 minikube.k8s.io/updated_at=2023_08_30T22_25_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.192335  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.440832  995603 ops.go:34] apiserver oom_adj: -16
	I0830 22:25:22.441067  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.560349  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.171762  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.671955  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.171789  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.671863  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.172176  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.672262  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.172348  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.672680  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.171856  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.671722  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.171712  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.671959  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.171914  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.672320  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.171688  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.671958  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.172481  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.672528  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.172583  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.672562  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.171839  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.672125  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.172515  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.672643  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.172469  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.672444  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.171897  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.672260  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.171900  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.332591  995603 kubeadm.go:1081] duration metric: took 15.140354535s to wait for elevateKubeSystemPrivileges.
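The repeated `kubectl get sa default` calls above are a readiness poll for the default service account; the equivalent wait as a one-off shell loop (the interval here is an arbitrary choice, not the one the test uses) would be:

	until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do
	  sleep 0.5
	done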
	I0830 22:25:37.332635  995603 kubeadm.go:406] StartCluster complete in 6m2.391789918s
	I0830 22:25:37.332659  995603 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.332770  995603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:25:37.334722  995603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.334991  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:25:37.335087  995603 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:25:37.335217  995603 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335241  995603 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-250163"
	W0830 22:25:37.335253  995603 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:25:37.335313  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335317  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:25:37.335322  995603 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335342  995603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-250163"
	I0830 22:25:37.335345  995603 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335380  995603 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-250163"
	W0830 22:25:37.335391  995603 addons.go:240] addon metrics-server should already be in state true
	I0830 22:25:37.335440  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335753  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335847  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335810  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335939  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.355619  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I0830 22:25:37.355760  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43941
	I0830 22:25:37.355979  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0830 22:25:37.356166  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356203  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356595  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356729  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356748  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.356730  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356793  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357097  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.357114  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357170  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357177  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357383  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.357486  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357825  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.357857  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.358246  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.358292  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.373639  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0830 22:25:37.374107  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.374639  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.374657  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.375035  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.375359  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.377439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.379303  995603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:25:37.378176  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0830 22:25:37.380617  995603 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-250163"
	W0830 22:25:37.380661  995603 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:25:37.380706  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.380787  995603 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.380802  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:25:37.380826  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.381081  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.381123  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.381726  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.382284  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.382304  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.382656  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.382878  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.384791  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.387018  995603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:25:37.385098  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.385806  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.388841  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.388863  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.388865  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:25:37.388883  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:25:37.388907  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.389015  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.389121  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.389274  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.392059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392538  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.392557  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392720  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.392861  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.392989  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.393101  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.399504  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I0830 22:25:37.399592  995603 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-250163" context rescaled to 1 replicas
	I0830 22:25:37.399627  995603 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:25:37.401322  995603 out.go:177] * Verifying Kubernetes components...
	I0830 22:25:37.400205  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.402915  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:37.403460  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.403485  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.403872  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.404488  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.404537  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.420598  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0830 22:25:37.421352  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.422218  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.422240  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.422714  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.422979  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.424750  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.425396  995603 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.425415  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:25:37.425439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.428142  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428731  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.428762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428900  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.429077  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.429330  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.429469  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
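The ssh clients created above use the per-profile key under the integration workspace; the same node can be reached manually, assuming the key path, user and IP recorded in this run:

	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa \
	  docker@192.168.39.10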
	I0830 22:25:37.705452  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.713345  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.736333  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:25:37.736356  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:25:37.825018  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:25:37.825051  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:25:37.858566  995603 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.858657  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:25:37.888050  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:37.888082  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:25:37.901662  995603 node_ready.go:49] node "old-k8s-version-250163" has status "Ready":"True"
	I0830 22:25:37.901689  995603 node_ready.go:38] duration metric: took 43.090996ms waiting for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.901701  995603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:37.928785  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:37.960479  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:39.232573  995603 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232603  995603 pod_ready.go:81] duration metric: took 1.303781463s waiting for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	E0830 22:25:39.232616  995603 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232630  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:39.305932  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.600438988s)
	I0830 22:25:39.306003  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306018  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306031  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592647384s)
	I0830 22:25:39.306084  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306106  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306088  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.447402831s)
	I0830 22:25:39.306222  995603 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
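The sed pipeline completed above rewrites the CoreDNS Corefile in the coredns ConfigMap so that the block below (taken from the expression itself) sits ahead of the forward plugin, which is what makes host.minikube.internal resolvable from pods:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

The result can be checked afterwards with `kubectl -n kube-system get configmap coredns -o yaml`.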
	I0830 22:25:39.306459  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306481  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306485  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306512  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306518  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306534  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306517  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306608  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306628  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306638  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306862  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306903  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306911  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306946  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306972  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306981  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306993  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.307001  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.307338  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.307387  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.307407  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.425740  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.465201154s)
	I0830 22:25:39.425823  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.425844  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426221  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426260  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426272  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426289  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.426311  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426584  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426620  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426638  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426657  995603 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-250163"
	I0830 22:25:39.428628  995603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:25:39.430476  995603 addons.go:502] enable addons completed in 2.095405793s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:25:40.785067  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.785090  995603 pod_ready.go:81] duration metric: took 1.552452887s waiting for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.785100  995603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790132  995603 pod_ready.go:92] pod "kube-proxy-866k8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.790158  995603 pod_ready.go:81] duration metric: took 5.051684ms waiting for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790173  995603 pod_ready.go:38] duration metric: took 2.888452893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:40.790199  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:25:40.790247  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:25:40.805458  995603 api_server.go:72] duration metric: took 3.405792506s to wait for apiserver process to appear ...
	I0830 22:25:40.805488  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:25:40.805512  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:25:40.812389  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:25:40.813455  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:25:40.813483  995603 api_server.go:131] duration metric: took 7.983448ms to wait for apiserver health ...
	I0830 22:25:40.813520  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:25:40.818720  995603 system_pods.go:59] 4 kube-system pods found
	I0830 22:25:40.818741  995603 system_pods.go:61] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.818746  995603 system_pods.go:61] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.818754  995603 system_pods.go:61] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.818763  995603 system_pods.go:61] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.818768  995603 system_pods.go:74] duration metric: took 5.239623ms to wait for pod list to return data ...
	I0830 22:25:40.818776  995603 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:25:40.821982  995603 default_sa.go:45] found service account: "default"
	I0830 22:25:40.822001  995603 default_sa.go:55] duration metric: took 3.215755ms for default service account to be created ...
	I0830 22:25:40.822010  995603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:25:40.824823  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:40.824844  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.824850  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.824860  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.824871  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.824896  995603 retry.go:31] will retry after 244.703972ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.075793  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.075829  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.075838  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.075849  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.075860  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.075886  995603 retry.go:31] will retry after 325.650304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.407202  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.407234  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.407242  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.407252  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.407262  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.407313  995603 retry.go:31] will retry after 449.708915ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.862007  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.862038  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.862043  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.862061  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.862070  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.862086  995603 retry.go:31] will retry after 484.451835ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:42.351597  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:42.351637  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:42.351646  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:42.351656  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:42.351664  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:42.351680  995603 retry.go:31] will retry after 739.711019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.096340  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.096365  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.096371  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.096380  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.096387  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.096402  995603 retry.go:31] will retry after 871.763135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.974914  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.974947  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.974954  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.974964  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.974973  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.974994  995603 retry.go:31] will retry after 1.11275286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:45.093268  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:45.093293  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:45.093299  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:45.093306  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:45.093313  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:45.093329  995603 retry.go:31] will retry after 1.015840649s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:46.114920  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:46.114954  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:46.114961  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:46.114972  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:46.114982  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:46.115002  995603 retry.go:31] will retry after 1.822388925s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:47.942838  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:47.942870  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:47.942877  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:47.942887  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:47.942900  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:47.942920  995603 retry.go:31] will retry after 1.516432463s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:49.464430  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:49.464460  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:49.464465  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:49.464473  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:49.464480  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:49.464496  995603 retry.go:31] will retry after 2.558675876s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:52.028440  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:52.028469  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:52.028474  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:52.028481  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:52.028488  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:52.028503  995603 retry.go:31] will retry after 2.801664105s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:54.835174  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:54.835200  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:54.835205  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:54.835212  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:54.835219  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:54.835243  995603 retry.go:31] will retry after 3.386411543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:58.228062  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:58.228104  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:58.228113  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:58.228123  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:58.228136  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:58.228158  995603 retry.go:31] will retry after 5.58749509s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:03.822486  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:03.822511  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:03.822516  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:03.822523  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:03.822530  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:03.822548  995603 retry.go:31] will retry after 6.26222599s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:10.092537  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:10.092563  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:10.092569  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:10.092576  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:10.092582  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:10.092599  995603 retry.go:31] will retry after 6.680813015s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:16.780093  995603 system_pods.go:86] 5 kube-system pods found
	I0830 22:26:16.780120  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:16.780125  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Pending
	I0830 22:26:16.780130  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:16.780138  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:16.780145  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:16.780161  995603 retry.go:31] will retry after 9.963152707s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:26.749177  995603 system_pods.go:86] 7 kube-system pods found
	I0830 22:26:26.749205  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:26.749211  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:26.749215  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:26.749219  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:26.749223  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Pending
	I0830 22:26:26.749230  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:26.749237  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:26.749252  995603 retry.go:31] will retry after 8.744971537s: missing components: etcd, kube-scheduler
	I0830 22:26:35.500731  995603 system_pods.go:86] 8 kube-system pods found
	I0830 22:26:35.500759  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:35.500765  995603 system_pods.go:89] "etcd-old-k8s-version-250163" [260642d3-280e-4ae1-97bc-d15a904b3205] Running
	I0830 22:26:35.500769  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:35.500775  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:35.500779  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:35.500783  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Running
	I0830 22:26:35.500789  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:35.500796  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:35.500813  995603 system_pods.go:126] duration metric: took 54.67879848s to wait for k8s-apps to be running ...
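The long run of "will retry after ..." lines above is a poll-with-growing-backoff loop: the kube-system pod list is re-read until etcd, kube-apiserver, kube-controller-manager and kube-scheduler all report Running, which here took about 54.7 seconds. Below is a rough sketch of that pattern, not minikube's actual retry.go; checkMissing is a hypothetical stand-in for the real pod-list inspection.

package main

import (
	"fmt"
	"time"
)

// checkMissing is a placeholder for the real kube-system pod inspection;
// it returns the names of control-plane components not yet Running.
func checkMissing() []string { return nil }

// waitForComponents re-checks the component list with a growing delay,
// roughly mirroring the retry intervals printed in the log above.
func waitForComponents(maxWait time.Duration) error {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		missing := checkMissing()
		if len(missing) == 0 {
			return nil
		}
		fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
		time.Sleep(delay)
		if delay < 10*time.Second {
			delay = delay * 3 / 2 // grow the backoff between attempts
		}
	}
	return fmt.Errorf("components still missing after %s", maxWait)
}

func main() {
	_ = waitForComponents(6 * time.Minute)
}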
	I0830 22:26:35.500827  995603 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:26:35.500876  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:26:35.519861  995603 system_svc.go:56] duration metric: took 19.021631ms WaitForService to wait for kubelet.
	I0830 22:26:35.519900  995603 kubeadm.go:581] duration metric: took 58.120243521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:26:35.519985  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:26:35.524455  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:26:35.524486  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:26:35.524537  995603 node_conditions.go:105] duration metric: took 4.543152ms to run NodePressure ...
	I0830 22:26:35.524550  995603 start.go:228] waiting for startup goroutines ...
	I0830 22:26:35.524562  995603 start.go:233] waiting for cluster config update ...
	I0830 22:26:35.524573  995603 start.go:242] writing updated cluster config ...
	I0830 22:26:35.524938  995603 ssh_runner.go:195] Run: rm -f paused
	I0830 22:26:35.578723  995603 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0830 22:26:35.580954  995603 out.go:177] 
	W0830 22:26:35.582332  995603 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0830 22:26:35.583700  995603 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0830 22:26:35.585290  995603 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-250163" cluster and "default" namespace by default
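The warning above comes down to a simple minor-version comparison: kubectl v1.28.1 against a v1.16.0 cluster is 12 minor releases apart, far outside the documented +/-1 kubectl skew policy, hence the suggestion to use the bundled 'minikube kubectl'. A small sketch of that arithmetic:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions
// of two "major.minor.patch" version strings.
func minorSkew(kubectlVersion, clusterVersion string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	skew := minor(kubectlVersion) - minor(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	return skew
}

func main() {
	fmt.Println(minorSkew("1.28.1", "1.16.0")) // prints 12, as reported in the log
}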
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:19:17 UTC, ends at Wed 2023-08-30 22:35:37 UTC. --
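The CRI-O entries that follow are debug-level output for repeated ListContainers polls: each Request carries an empty filter ("No filters were applied"), and each Response enumerates the same running containers (storage-provisioner, kube-proxy, coredns, etcd, kube-scheduler, kube-controller-manager, kube-apiserver). As a hedged illustration only — not minikube or kubelet code — the same RPC can be issued directly against the CRI socket, assuming the default CRI-O socket path and the v1alpha2 API named in the log. On the node itself, 'sudo crictl ps -a' would show the same containers in tabular form.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed socket path; CRI-O commonly listens on /var/run/crio/crio.sock.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := pb.NewRuntimeServiceClient(conn)
	// An empty filter returns the full container list, matching the
	// "No filters were applied" debug line above.
	resp, err := client.ListContainers(ctx, &pb.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}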
	Aug 30 22:35:36 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:36.460763076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=500fd8a3-4a76-4d8a-a817-3211412cbea8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:36 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:36.970358594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4eca4259-7906-47bc-b75a-10798f9c1030 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:36 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:36.970435065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4eca4259-7906-47bc-b75a-10798f9c1030 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:36 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:36.970588204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4eca4259-7906-47bc-b75a-10798f9c1030 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.005586623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cbfe4290-abad-4442-b719-425fbb2e2015 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.005750913Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cbfe4290-abad-4442-b719-425fbb2e2015 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.005990921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cbfe4290-abad-4442-b719-425fbb2e2015 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.040350877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f9ffc3c-22a7-4a3c-8ddc-60d5cd964f02 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.040434368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f9ffc3c-22a7-4a3c-8ddc-60d5cd964f02 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.040587387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f9ffc3c-22a7-4a3c-8ddc-60d5cd964f02 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.077534608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ce14126d-2dc3-40db-8ff6-59670a30fdbd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.077648061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ce14126d-2dc3-40db-8ff6-59670a30fdbd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.077821652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ce14126d-2dc3-40db-8ff6-59670a30fdbd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.111083576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b31e577-cce6-456d-a0e6-c86bb5dd7e1f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.111227032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b31e577-cce6-456d-a0e6-c86bb5dd7e1f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.111377783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b31e577-cce6-456d-a0e6-c86bb5dd7e1f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.148881720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=59bb48c4-b490-4af3-aba2-5e2951f2c9b7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.149027564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=59bb48c4-b490-4af3-aba2-5e2951f2c9b7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.149219393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=59bb48c4-b490-4af3-aba2-5e2951f2c9b7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.184246997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=111c240b-0c75-47fc-b32b-8beccc1482e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.184309610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=111c240b-0c75-47fc-b32b-8beccc1482e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.184470738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=111c240b-0c75-47fc-b32b-8beccc1482e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.221566260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e37fb368-74de-4d96-aad0-51b1a91e6154 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.221669894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e37fb368-74de-4d96-aad0-51b1a91e6154 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:35:37 old-k8s-version-250163 crio[727]: time="2023-08-30 22:35:37.221832048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e37fb368-74de-4d96-aad0-51b1a91e6154 name=/runtime.v1alpha2.RuntimeService/ListContainers
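
The repeated Request/Response pairs above are CRI-O answering back-to-back runtime.v1alpha2.RuntimeService/ListContainers calls issued with an empty filter, which is why each one is preceded by the "No filters were applied, returning full container list" debug line. A minimal Go sketch of the same call, included only for illustration and not part of the captured logs (the socket path and CRI package version are taken from the log itself; everything else is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
        // CRI-O serves the CRI over a unix socket; grpc-go understands the unix:// scheme.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // An empty ContainerFilter returns the full container list, matching the
        // "No filters were applied" debug message CRI-O logs above.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{},
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Metadata.Name, c.State)
        }
    }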
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	9f802bfb55765       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6cc013a0480f2
	e431d8a4958cb       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   9 minutes ago       Running             kube-proxy                0                   4f96a206b5b26
	dc635d8d2b1fd       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   9 minutes ago       Running             coredns                   0                   e1d50f90c4835
	89270eb7de796       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   9b942bb4cce90
	f57bf075c4b20       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   a85229d1e4c0e
	39b4851cb8055       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   fadf3ac872dc6
	920aba79a414a       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   f6a8673bc69b4
	
	* 
	* ==> coredns [dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24] <==
	* .:53
	2023-08-30T22:25:38.870Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-08-30T22:25:38.870Z [INFO] CoreDNS-1.6.2
	2023-08-30T22:25:38.870Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-08-30T22:26:11.240Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	2023-08-30T22:26:11.250Z [INFO] 127.0.0.1:57536 - 23550 "HINFO IN 1862811921159354688.5496499725226021289. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009990067s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-250163
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-250163
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=old-k8s-version-250163
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_25_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:35:17 +0000   Wed, 30 Aug 2023 22:25:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:35:17 +0000   Wed, 30 Aug 2023 22:25:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:35:17 +0000   Wed, 30 Aug 2023 22:25:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:35:17 +0000   Wed, 30 Aug 2023 22:25:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    old-k8s-version-250163
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 62beb253a8dc41729f656e941cb2e92f
	 System UUID:                62beb253-a8dc-4172-9f65-6e941cb2e92f
	 Boot ID:                    8af8a3e6-a3bd-4c5e-a24c-b628d1ae9309
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-ntb45                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-250163                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                kube-apiserver-old-k8s-version-250163             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                kube-controller-manager-old-k8s-version-250163    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                kube-proxy-866k8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-250163             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                metrics-server-74d5856cc6-h6bcw                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m57s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-250163     Node old-k8s-version-250163 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-250163     Node old-k8s-version-250163 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-250163     Node old-k8s-version-250163 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m58s              kube-proxy, old-k8s-version-250163  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug30 22:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075294] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.445741] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.478783] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155071] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.599133] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.294424] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.112730] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.209919] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.124730] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.254347] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +20.207259] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.405368] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug30 22:20] kauditd_printk_skb: 18 callbacks suppressed
	[Aug30 22:25] systemd-fstab-generator[3164]: Ignoring "noauto" for root device
	[  +0.723412] kauditd_printk_skb: 6 callbacks suppressed
	[Aug30 22:26] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae] <==
	* 2023-08-30 22:25:13.394268 I | raft: newRaft f8926bd555ec3d0e [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-08-30 22:25:13.394327 I | raft: f8926bd555ec3d0e became follower at term 1
	2023-08-30 22:25:13.402969 W | auth: simple token is not cryptographically signed
	2023-08-30 22:25:13.410882 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-08-30 22:25:13.412058 I | etcdserver: f8926bd555ec3d0e as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-08-30 22:25:13.412395 I | etcdserver/membership: added member f8926bd555ec3d0e [https://192.168.39.10:2380] to cluster 3a710b3f69152e32
	2023-08-30 22:25:13.414026 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-30 22:25:13.414410 I | embed: listening for metrics on http://192.168.39.10:2381
	2023-08-30 22:25:13.414617 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-30 22:25:13.794762 I | raft: f8926bd555ec3d0e is starting a new election at term 1
	2023-08-30 22:25:13.794851 I | raft: f8926bd555ec3d0e became candidate at term 2
	2023-08-30 22:25:13.794877 I | raft: f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 2
	2023-08-30 22:25:13.795056 I | raft: f8926bd555ec3d0e became leader at term 2
	2023-08-30 22:25:13.795180 I | raft: raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 2
	2023-08-30 22:25:13.795745 I | etcdserver: published {Name:old-k8s-version-250163 ClientURLs:[https://192.168.39.10:2379]} to cluster 3a710b3f69152e32
	2023-08-30 22:25:13.795790 I | embed: ready to serve client requests
	2023-08-30 22:25:13.796178 I | etcdserver: setting up the initial cluster version to 3.3
	2023-08-30 22:25:13.796277 I | embed: ready to serve client requests
	2023-08-30 22:25:13.797588 I | embed: serving client requests on 192.168.39.10:2379
	2023-08-30 22:25:13.797740 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-30 22:25:13.809534 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-08-30 22:25:13.809636 I | etcdserver/api: enabled capabilities for version 3.3
	2023-08-30 22:25:38.807571 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (143.159377ms) to execute
	2023-08-30 22:35:13.831432 I | mvcc: store.index: compact 666
	2023-08-30 22:35:13.833301 I | mvcc: finished scheduled compaction at 666 (took 1.478049ms)
	
	* 
	* ==> kernel <==
	*  22:35:37 up 16 min,  0 users,  load average: 0.00, 0.09, 0.14
	Linux old-k8s-version-250163 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea] <==
	* I0830 22:28:40.732725       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:28:40.732950       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:28:40.733037       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:28:40.733060       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:30:17.984631       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:30:17.984743       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:30:17.984798       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:30:17.984806       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:31:17.985281       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:31:17.985540       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:31:17.985598       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:31:17.985619       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:33:17.986015       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:33:17.986164       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:33:17.986222       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:33:17.986241       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:35:17.986821       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:35:17.987065       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:35:17.987141       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:35:17.987149       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0] <==
	* E0830 22:29:08.766366       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:29:22.094138       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:29:39.018239       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:29:54.096118       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:30:09.270457       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:30:26.098530       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:30:39.524184       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:30:58.100622       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:31:09.776364       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:31:30.102784       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:31:40.028460       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:32:02.104768       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:32:10.280492       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:32:34.107270       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:32:40.532724       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:33:06.109154       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:33:10.788005       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:33:38.111225       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:33:41.040814       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:34:10.113181       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:34:11.293155       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0830 22:34:41.551119       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:34:42.115267       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:35:11.803277       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:35:14.117036       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b] <==
	* W0830 22:25:39.532741       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0830 22:25:39.542057       1 node.go:135] Successfully retrieved node IP: 192.168.39.10
	I0830 22:25:39.542086       1 server_others.go:149] Using iptables Proxier.
	I0830 22:25:39.542546       1 server.go:529] Version: v1.16.0
	I0830 22:25:39.552279       1 config.go:313] Starting service config controller
	I0830 22:25:39.552497       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0830 22:25:39.552653       1 config.go:131] Starting endpoints config controller
	I0830 22:25:39.552760       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0830 22:25:39.652843       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0830 22:25:39.653283       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8] <==
	* W0830 22:25:17.018003       1 authentication.go:79] Authentication is disabled
	I0830 22:25:17.018028       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0830 22:25:17.018468       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0830 22:25:17.056751       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 22:25:17.058179       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 22:25:17.058318       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 22:25:17.060687       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 22:25:17.060876       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 22:25:17.061185       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 22:25:17.063272       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 22:25:17.063343       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 22:25:17.063375       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 22:25:17.068657       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 22:25:17.070819       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 22:25:18.058342       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 22:25:18.059758       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 22:25:18.060674       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 22:25:18.062000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 22:25:18.064494       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 22:25:18.069331       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 22:25:18.069978       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 22:25:18.071137       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 22:25:18.073391       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 22:25:18.074566       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 22:25:18.076009       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:19:17 UTC, ends at Wed 2023-08-30 22:35:37 UTC. --
	Aug 30 22:31:00 old-k8s-version-250163 kubelet[3170]: E0830 22:31:00.457525    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:31:12 old-k8s-version-250163 kubelet[3170]: E0830 22:31:12.457068    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:31:23 old-k8s-version-250163 kubelet[3170]: E0830 22:31:23.479350    3170 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:31:23 old-k8s-version-250163 kubelet[3170]: E0830 22:31:23.479426    3170 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:31:23 old-k8s-version-250163 kubelet[3170]: E0830 22:31:23.479476    3170 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:31:23 old-k8s-version-250163 kubelet[3170]: E0830 22:31:23.479503    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Aug 30 22:31:34 old-k8s-version-250163 kubelet[3170]: E0830 22:31:34.459731    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:31:49 old-k8s-version-250163 kubelet[3170]: E0830 22:31:49.461238    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:32:03 old-k8s-version-250163 kubelet[3170]: E0830 22:32:03.457575    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:32:15 old-k8s-version-250163 kubelet[3170]: E0830 22:32:15.457362    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:32:29 old-k8s-version-250163 kubelet[3170]: E0830 22:32:29.457611    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:32:42 old-k8s-version-250163 kubelet[3170]: E0830 22:32:42.457289    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:32:53 old-k8s-version-250163 kubelet[3170]: E0830 22:32:53.457838    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:33:08 old-k8s-version-250163 kubelet[3170]: E0830 22:33:08.457534    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:33:22 old-k8s-version-250163 kubelet[3170]: E0830 22:33:22.457191    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:33:37 old-k8s-version-250163 kubelet[3170]: E0830 22:33:37.457714    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:33:51 old-k8s-version-250163 kubelet[3170]: E0830 22:33:51.456997    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:04 old-k8s-version-250163 kubelet[3170]: E0830 22:34:04.458035    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:19 old-k8s-version-250163 kubelet[3170]: E0830 22:34:19.457980    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:33 old-k8s-version-250163 kubelet[3170]: E0830 22:34:33.458149    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:45 old-k8s-version-250163 kubelet[3170]: E0830 22:34:45.457460    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:56 old-k8s-version-250163 kubelet[3170]: E0830 22:34:56.457787    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:35:10 old-k8s-version-250163 kubelet[3170]: E0830 22:35:10.541857    3170 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Aug 30 22:35:11 old-k8s-version-250163 kubelet[3170]: E0830 22:35:11.456835    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:35:25 old-k8s-version-250163 kubelet[3170]: E0830 22:35:25.457021    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017] <==
	* I0830 22:25:40.143108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 22:25:40.163233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 22:25:40.163498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 22:25:40.173805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 22:25:40.174786       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-250163_9155072c-a4d7-4e6f-9b3d-499b40e038a6!
	I0830 22:25:40.176941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfc689f5-cdce-4b1f-82e2-4c32d1ad584d", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-250163_9155072c-a4d7-4e6f-9b3d-499b40e038a6 became leader
	I0830 22:25:40.275060       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-250163_9155072c-a4d7-4e6f-9b3d-499b40e038a6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-250163 -n old-k8s-version-250163
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-250163 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-h6bcw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-250163 describe pod metrics-server-74d5856cc6-h6bcw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-250163 describe pod metrics-server-74d5856cc6-h6bcw: exit status 1 (70.20405ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-h6bcw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-250163 describe pod metrics-server-74d5856cc6-h6bcw: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (560.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
[identical warning repeated 67 more times]
E0830 22:31:49.715263  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
[identical warning repeated 7 more times]
E0830 22:31:57.076796  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
[identical warning repeated 34 more times]
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:33:12.762863  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:34:22.734703  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:35:45.786487  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:36:49.715964  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
E0830 22:36:57.076804  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.159:8443: connect: connection refused
[the identical pod-list warning repeated on every subsequent poll attempt while the apiserver at 192.168.50.159:8443 kept refusing connections]
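The warning above is the test helper repeatedly listing pods in the kubernetes-dashboard namespace through the apiserver at 192.168.50.159:8443, which stayed unreachable for the whole wait. A roughly equivalent manual check, assuming the embed-certs-208903 context is still present in the kubeconfig (the helper itself lists pods through the Go client rather than kubectl), would be:

    # list dashboard pods by the same label selector the helper polls;
    # while the apiserver is stopped this fails with the same "connection refused"
    kubectl --context embed-certs-208903 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard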
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903: exit status 2 (284.48158ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "embed-certs-208903" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-208903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-208903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.913µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-208903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
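The image assertion reads the dashboard-metrics-scraper deployment and expects its container image to contain registry.k8s.io/echoserver:1.4; because the describe call above hit the context deadline, the captured deployment info is empty. A hedged manual equivalent, assuming the apiserver becomes reachable again, would be:

    # print the scraper's container image; the test expects it to contain registry.k8s.io/echoserver:1.4
    kubectl --context embed-certs-208903 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'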
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-208903 -n embed-certs-208903: exit status 2 (287.874925ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-208903 logs -n 25
E0830 22:39:18.793851  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:18.799140  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:18.809421  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:18.829712  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:18.870585  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:18.950928  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:19.111394  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:19.431844  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:20.072518  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:21.353045  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:22.735050  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 22:39:23.914139  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:29.034789  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:39.275368  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
E0830 22:39:59.756531  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-208903 logs -n 25: (54.776216902s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:12 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-698195             | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208903            | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-791007  | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC | 30 Aug 23 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-698195                  | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208903                 | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC | 30 Aug 23 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-250163        | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC | 30 Aug 23 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-791007       | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC | 30 Aug 23 22:24 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-250163             | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC | 30 Aug 23 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:37 UTC | 30 Aug 23 22:37 UTC |
	| start   | -p newest-cni-618803 --memory=2200 --alsologtostderr   | newest-cni-618803            | jenkins | v1.31.2 | 30 Aug 23 22:37 UTC | 30 Aug 23 22:38 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:38 UTC | 30 Aug 23 22:38 UTC |
	| start   | -p auto-051361 --memory=3072                           | auto-051361                  | jenkins | v1.31.2 | 30 Aug 23 22:38 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-618803             | newest-cni-618803            | jenkins | v1.31.2 | 30 Aug 23 22:38 UTC | 30 Aug 23 22:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-618803                                   | newest-cni-618803            | jenkins | v1.31.2 | 30 Aug 23 22:38 UTC | 30 Aug 23 22:39 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-618803                  | newest-cni-618803            | jenkins | v1.31.2 | 30 Aug 23 22:39 UTC | 30 Aug 23 22:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-618803 --memory=2200 --alsologtostderr   | newest-cni-618803            | jenkins | v1.31.2 | 30 Aug 23 22:39 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:39:05
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:39:05.584030 1001366 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:39:05.584168 1001366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:39:05.584177 1001366 out.go:309] Setting ErrFile to fd 2...
	I0830 22:39:05.584181 1001366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:39:05.584384 1001366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:39:05.585012 1001366 out.go:303] Setting JSON to false
	I0830 22:39:05.586036 1001366 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15693,"bootTime":1693419453,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:39:05.586121 1001366 start.go:138] virtualization: kvm guest
	I0830 22:39:05.588839 1001366 out.go:177] * [newest-cni-618803] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:39:05.590495 1001366 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:39:05.592073 1001366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:39:05.590578 1001366 notify.go:220] Checking for updates...
	I0830 22:39:05.593757 1001366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:39:05.595129 1001366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:39:05.596485 1001366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:39:05.597927 1001366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:39:05.599859 1001366 config.go:182] Loaded profile config "newest-cni-618803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:39:05.600254 1001366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:39:05.600353 1001366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:39:05.616623 1001366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46085
	I0830 22:39:05.617058 1001366 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:39:05.617658 1001366 main.go:141] libmachine: Using API Version  1
	I0830 22:39:05.617687 1001366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:39:05.617997 1001366 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:39:05.618182 1001366 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:39:05.618417 1001366 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:39:05.618706 1001366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:39:05.618756 1001366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:39:05.633595 1001366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41245
	I0830 22:39:05.634091 1001366 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:39:05.634630 1001366 main.go:141] libmachine: Using API Version  1
	I0830 22:39:05.634671 1001366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:39:05.635003 1001366 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:39:05.635183 1001366 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:39:05.671299 1001366 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:39:05.672781 1001366 start.go:298] selected driver: kvm2
	I0830 22:39:05.672807 1001366 start.go:902] validating driver "kvm2" against &{Name:newest-cni-618803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-618803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:fal
se system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:39:05.672926 1001366 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:39:05.673829 1001366 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:39:05.673915 1001366 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:39:05.688718 1001366 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:39:05.689242 1001366 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0830 22:39:05.689285 1001366 cni.go:84] Creating CNI manager for ""
	I0830 22:39:05.689298 1001366 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:39:05.689307 1001366 start_flags.go:319] config:
	{Name:newest-cni-618803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-618803 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]
ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:39:05.689515 1001366 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:39:05.691592 1001366 out.go:177] * Starting control plane node newest-cni-618803 in cluster newest-cni-618803
	I0830 22:39:05.693054 1001366 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:39:05.693095 1001366 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 22:39:05.693109 1001366 cache.go:57] Caching tarball of preloaded images
	I0830 22:39:05.693201 1001366 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:39:05.693214 1001366 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 22:39:05.693376 1001366 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/config.json ...
	I0830 22:39:05.693591 1001366 start.go:365] acquiring machines lock for newest-cni-618803: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:39:05.693643 1001366 start.go:369] acquired machines lock for "newest-cni-618803" in 27.864µs
	I0830 22:39:05.693667 1001366 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:39:05.693673 1001366 fix.go:54] fixHost starting: 
	I0830 22:39:05.694072 1001366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:39:05.694112 1001366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:39:05.708739 1001366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0830 22:39:05.709206 1001366 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:39:05.709738 1001366 main.go:141] libmachine: Using API Version  1
	I0830 22:39:05.709763 1001366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:39:05.710087 1001366 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:39:05.710274 1001366 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:39:05.710427 1001366 main.go:141] libmachine: (newest-cni-618803) Calling .GetState
	I0830 22:39:05.712240 1001366 fix.go:102] recreateIfNeeded on newest-cni-618803: state=Stopped err=<nil>
	I0830 22:39:05.712267 1001366 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	W0830 22:39:05.712453 1001366 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:39:05.715160 1001366 out.go:177] * Restarting existing kvm2 VM for "newest-cni-618803" ...
	I0830 22:39:03.433395 1000926 out.go:204]   - Generating certificates and keys ...
	I0830 22:39:03.433528 1000926 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:39:03.433620 1000926 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:39:03.593012 1000926 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 22:39:04.000252 1000926 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 22:39:04.253463 1000926 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 22:39:04.339899 1000926 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 22:39:04.534719 1000926 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 22:39:04.534935 1000926 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-051361 localhost] and IPs [192.168.72.212 127.0.0.1 ::1]
	I0830 22:39:04.768789 1000926 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 22:39:04.768966 1000926 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-051361 localhost] and IPs [192.168.72.212 127.0.0.1 ::1]
	I0830 22:39:04.908340 1000926 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 22:39:05.164524 1000926 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 22:39:05.258906 1000926 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 22:39:05.259014 1000926 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:39:05.531192 1000926 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:39:05.650517 1000926 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:39:05.964317 1000926 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:39:06.055511 1000926 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:39:06.056119 1000926 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:39:06.058440 1000926 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:39:06.060778 1000926 out.go:204]   - Booting up control plane ...
	I0830 22:39:06.060916 1000926 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:39:06.061024 1000926 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:39:06.062394 1000926 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:39:06.077278 1000926 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:39:06.078298 1000926 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:39:06.078390 1000926 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:39:06.233705 1000926 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:18:37 UTC, ends at Wed 2023-08-30 22:39:56 UTC. --
	Aug 30 22:18:39 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:18:39 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 30 22:18:44 embed-certs-208903 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:18:44 embed-certs-208903 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 30 22:19:51 embed-certs-208903 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 30 22:19:51 embed-certs-208903 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	* 
	* ==> container status <==
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug30 22:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072921] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.305428] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387854] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153721] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.490379] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	
	* 
	* ==> kernel <==
	*  22:40:02 up 21 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux embed-certs-208903 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:18:37 UTC, ends at Wed 2023-08-30 22:40:02 UTC. --
	-- No entries --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:39:14.262636 1001514 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:08Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:10Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:12Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:14Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:39:20.301376 1001514 logs.go:281] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:14Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:16Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:18Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:20Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:39:26.331006 1001514 logs.go:281] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:20Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:22Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:24Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:26Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:39:32.357931 1001514 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:26Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:28Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:30Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:32Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:39:38.401230 1001514 logs.go:281] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:32Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:34Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:36Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:38Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:39:44.429578 1001514 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:38Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:40Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:42Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:44Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:39:50.456625 1001514 logs.go:281] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:44Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:46Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:48Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:50Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:39:56.496318 1001514 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:50Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:52Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:54Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:39:56Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0830 22:40:02.591618 1001514 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-08-30T22:39:56Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2023-08-30T22:39:58Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:40:00Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2023-08-30T22:40:02Z" level=fatal msg="connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-08-30T22:39:56Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead.\"\ntime=\"2023-08-30T22:39:58Z\" level=error msg=\"connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\ntime=\"2023-08-30T22:40:00Z\" level=error msg=\"connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\ntime=\"2023-08-30T22:40:02Z\" level=fatal msg=\"connect: connect endpoint 'unix:///run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /st
derr **"
	E0830 22:40:02.707313 1001514 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0830 22:40:02.684633     755 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:40:02.685712     755 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:40:02.687675     755 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:40:02.689366     755 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	E0830 22:40:02.691106     755 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nE0830 22:40:02.684633     755 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:40:02.685712     755 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:40:02.687675     755 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:40:02.689366     755 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nE0830 22:40:02.691106     755 memcache.go:265] couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused\nThe connection to the s
erver localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-208903 -n embed-certs-208903: exit status 2 (257.426034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "embed-certs-208903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (560.12s)
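The crictl failures captured in the stderr block above follow from CRI-O never starting on this node: the journal shows crio.service failing with a dependency error, so every probe of the default runtime endpoints times out. The deprecation warning itself says to set the endpoint explicitly; a minimal troubleshooting sketch, assuming CRI-O is the intended runtime at its standard socket path (illustrative only, not part of the test run), would be:

  # Point crictl at the CRI-O socket explicitly instead of the deprecated default endpoint list
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
  # (equivalently, persist runtime-endpoint/image-endpoint in /etc/crictl.yaml)
  # Listing containers will still fail while crio.service is down; inspect the failed unit first
  sudo systemctl status crio --no-pager
  sudo journalctl -u crio -b --no-pager | tail -n 50

Until the crio.service dependency failure is resolved, the apiserver status reported above cannot change.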

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (423.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-08-30 22:40:49.181924429 +0000 UTC m=+5491.374674363
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-791007 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-791007 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.33µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-791007 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
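For reference, the checks this test performs can be approximated by hand against the same profile; a minimal sketch, assuming the default-k8s-diff-port-791007 profile is still running (shown only to make the failed expectation concrete), would be:

  # List the dashboard addon pods the test waited 9m0s for
  kubectl --context default-k8s-diff-port-791007 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  # Show the scraper deployment's image, which the test expects to contain registry.k8s.io/echoserver:1.4
  kubectl --context default-k8s-diff-port-791007 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
    -o jsonpath='{.spec.template.spec.containers[*].image}'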
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-791007 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-791007 logs -n 25: (1.141921426s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | status kubelet --all --full                          |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo journalctl                       | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo cat                              | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo cat                              | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo cat                              | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo docker                           | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo cat                              | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo cat                              | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo                                  | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo cat                              | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo cat                              | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo containerd                       | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo systemctl                        | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo find                             | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-051361 sudo crio                             | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-051361                                       | auto-051361           | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC | 30 Aug 23 22:40 UTC |
	| start   | -p custom-flannel-051361                             | custom-flannel-051361 | jenkins | v1.31.2 | 30 Aug 23 22:40 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:40:42
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:40:42.192434 1004046 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:40:42.192612 1004046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:40:42.192625 1004046 out.go:309] Setting ErrFile to fd 2...
	I0830 22:40:42.192632 1004046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:40:42.192939 1004046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:40:42.193770 1004046 out.go:303] Setting JSON to false
	I0830 22:40:42.195336 1004046 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15789,"bootTime":1693419453,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:40:42.195423 1004046 start.go:138] virtualization: kvm guest
	I0830 22:40:42.198058 1004046 out.go:177] * [custom-flannel-051361] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:40:42.200000 1004046 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:40:42.200075 1004046 notify.go:220] Checking for updates...
	I0830 22:40:42.201473 1004046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:40:42.203258 1004046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:40:42.204859 1004046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:40:42.206396 1004046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:40:42.207861 1004046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:40:42.209888 1004046 config.go:182] Loaded profile config "calico-051361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:40:42.210081 1004046 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:40:42.210220 1004046 config.go:182] Loaded profile config "kindnet-051361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:40:42.210386 1004046 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:40:42.253906 1004046 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 22:40:42.255570 1004046 start.go:298] selected driver: kvm2
	I0830 22:40:42.255594 1004046 start.go:902] validating driver "kvm2" against <nil>
	I0830 22:40:42.255610 1004046 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:40:42.256680 1004046 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:40:42.256772 1004046 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:40:42.275054 1004046 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:40:42.275123 1004046 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 22:40:42.275439 1004046 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:40:42.275502 1004046 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0830 22:40:42.275519 1004046 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0830 22:40:42.275541 1004046 start_flags.go:319] config:
	{Name:custom-flannel-051361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-051361 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:40:42.275766 1004046 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:40:42.277917 1004046 out.go:177] * Starting control plane node custom-flannel-051361 in cluster custom-flannel-051361
	I0830 22:40:40.378069 1002244 out.go:204]   - Booting up control plane ...
	I0830 22:40:40.378225 1002244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:40:40.378318 1002244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:40:40.378838 1002244 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:40:40.395674 1002244 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:40:40.398709 1002244 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:40:40.398978 1002244 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:40:40.550319 1002244 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:40:42.225708 1002447 main.go:141] libmachine: (calico-051361) DBG | domain calico-051361 has defined MAC address 52:54:00:0f:b7:35 in network mk-calico-051361
	I0830 22:40:42.226354 1002447 main.go:141] libmachine: (calico-051361) DBG | unable to find current IP address of domain calico-051361 in network mk-calico-051361
	I0830 22:40:42.226379 1002447 main.go:141] libmachine: (calico-051361) DBG | I0830 22:40:42.226233 1002716 retry.go:31] will retry after 2.379933472s: waiting for machine to come up
	I0830 22:40:44.607635 1002447 main.go:141] libmachine: (calico-051361) DBG | domain calico-051361 has defined MAC address 52:54:00:0f:b7:35 in network mk-calico-051361
	I0830 22:40:44.608126 1002447 main.go:141] libmachine: (calico-051361) DBG | unable to find current IP address of domain calico-051361 in network mk-calico-051361
	I0830 22:40:44.608154 1002447 main.go:141] libmachine: (calico-051361) DBG | I0830 22:40:44.608079 1002716 retry.go:31] will retry after 2.940827263s: waiting for machine to come up
	I0830 22:40:42.279545 1004046 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:40:42.279595 1004046 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 22:40:42.279603 1004046 cache.go:57] Caching tarball of preloaded images
	I0830 22:40:42.279683 1004046 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:40:42.279694 1004046 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 22:40:42.279833 1004046 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/custom-flannel-051361/config.json ...
	I0830 22:40:42.279857 1004046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/custom-flannel-051361/config.json: {Name:mk01b01c1b59bc9c8ddf88fd9f982e55ddf8b42c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:40:42.280027 1004046 start.go:365] acquiring machines lock for custom-flannel-051361: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:40:48.551881 1002244 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002894 seconds
	I0830 22:40:48.552028 1002244 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:40:48.575826 1002244 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:18:56 UTC, ends at Wed 2023-08-30 22:40:49 UTC. --
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.583477277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=47af7f06-6042-45e2-ba40-a9ea4799953e name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.697463211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=078a7a9a-cf99-4049-a6ec-74285daeb738 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.697530500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=078a7a9a-cf99-4049-a6ec-74285daeb738 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.697717629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=078a7a9a-cf99-4049-a6ec-74285daeb738 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.740349643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e4bcfde1-6b58-42d2-9d89-9419aa58293e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.740466125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e4bcfde1-6b58-42d2-9d89-9419aa58293e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.740653843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e4bcfde1-6b58-42d2-9d89-9419aa58293e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.783103039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5ba45b92-6e56-433d-8b13-25822e171bd5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.783173200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5ba45b92-6e56-433d-8b13-25822e171bd5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.783468490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5ba45b92-6e56-433d-8b13-25822e171bd5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.818038107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=170be8e9-7875-4049-b028-534df2f04e9b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.818103430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=170be8e9-7875-4049-b028-534df2f04e9b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.818252270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=170be8e9-7875-4049-b028-534df2f04e9b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.852360087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=91ff9b80-646f-48b8-a887-8b8f03672616 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.852429929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=91ff9b80-646f-48b8-a887-8b8f03672616 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.852578312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=91ff9b80-646f-48b8-a887-8b8f03672616 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.893510050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4db308c3-4e93-4a78-8cb0-b4a47b9095b9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.893572743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4db308c3-4e93-4a78-8cb0-b4a47b9095b9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.893734047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4db308c3-4e93-4a78-8cb0-b4a47b9095b9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.938868940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=92b6e0aa-13e3-4dea-8d51-ea3691aca27b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.938929591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=92b6e0aa-13e3-4dea-8d51-ea3691aca27b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.939120753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=92b6e0aa-13e3-4dea-8d51-ea3691aca27b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.972775817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2d584fa3-f223-4cb8-bbb1-c6bc67323060 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.972874689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2d584fa3-f223-4cb8-bbb1-c6bc67323060 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:40:49 default-k8s-diff-port-791007 crio[721]: time="2023-08-30 22:40:49.973136341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047,PodSandboxId:3f8191e0d2d119a363c220262389f6b42ed34edc57e35976d4995076a11ad735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434282182908083,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb41168e-19d2-4b57-a2fb-ab0b3d0ff836,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd64385,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856,PodSandboxId:1c20c28c707562384ffa4d6522e2f0ca1b113621182f55467161a7a33fad1926,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1693434281941206437,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbdvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd98a34a-f2f9-4e73-a751-e68a1addb89f,},Annotations:map[string]string{io.kubernetes.container.hash: af143632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db,PodSandboxId:a3a6163233a289ea81430e2e6e5f79cd02dab83429918cf0594a54b10fb02307,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1693434281282642077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jwn87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984f4b65-9261-4952-a368-5fac2fa14bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 94fcc1c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02,PodSandboxId:413acaa73944713987ae450c15a6b0a4a91e41bcaa69d178b1707502fb19bd48,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1693434257205606444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ba8db4d7d99fb8d7
abe6ba67dadb480,},Annotations:map[string]string{io.kubernetes.container.hash: 429cdf15,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df,PodSandboxId:e3e258fe8fa0fdae49ef1d7040dcc442e6aea5c1dffad8b1d18725bcd4595116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1693434257293058959,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 99fcf05bcab8afc51c97c0772eeb6a59,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f,PodSandboxId:36b343bc687e19139ab3381bcc76fc0d1241498d4636a5c093e79a841629cc6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1693434257179812095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a079395c9162847b9a330dbc46de23e4,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587,PodSandboxId:7641f3d5c0e64567a9cb6792792c6bc1f6d33c2e9485440b0e5b129fc7e5f120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1693434257045357887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-791007,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 51efe9c4dd41db71e0ba21bdab389ceb,},Annotations:map[string]string{io.kubernetes.container.hash: b3c42664,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2d584fa3-f223-4cb8-bbb1-c6bc67323060 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	bded3689c729f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   3f8191e0d2d11
	5b8928ed58904       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   16 minutes ago      Running             kube-proxy                0                   1c20c28c70756
	78554fafd9dc7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   a3a6163233a28
	a5597f1b16dd0       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   16 minutes ago      Running             kube-controller-manager   2                   e3e258fe8fa0f
	9d020424185d5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   413acaa739447
	7a575d95cbfee       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   16 minutes ago      Running             kube-scheduler            2                   36b343bc687e1
	0a27e2279b8df       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   16 minutes ago      Running             kube-apiserver            2                   7641f3d5c0e64
	
	* 
	* ==> coredns [78554fafd9dc75a5d29e458ea7fcc54e06c4f189af454686b97aa148e760a5db] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-791007
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-791007
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=default-k8s-diff-port-791007
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_24_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:24:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-791007
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:40:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:40:04 +0000   Wed, 30 Aug 2023 22:24:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:40:04 +0000   Wed, 30 Aug 2023 22:24:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:40:04 +0000   Wed, 30 Aug 2023 22:24:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:40:04 +0000   Wed, 30 Aug 2023 22:24:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.104
	  Hostname:    default-k8s-diff-port-791007
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 27c45b64d2c140e0acc60d76ccf1ce71
	  System UUID:                27c45b64-d2c1-40e0-acc6-0d76ccf1ce71
	  Boot ID:                    ab5f50e2-016c-4e34-9579-6ff6f84608a5
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jwn87                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-791007                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-791007             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-791007    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-bbdvk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-791007             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-dllmg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-791007 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node default-k8s-diff-port-791007 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node default-k8s-diff-port-791007 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-791007 event: Registered Node default-k8s-diff-port-791007 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug30 22:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072083] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.308731] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.509461] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150959] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.440847] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug30 22:19] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.113319] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.149093] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.112613] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.223553] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +17.199617] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[ +21.577474] kauditd_printk_skb: 29 callbacks suppressed
	[Aug30 22:24] systemd-fstab-generator[3528]: Ignoring "noauto" for root device
	[  +9.284315] systemd-fstab-generator[3853]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [9d020424185d500c6ed61d1c46e8958fa8f0792c30bc030f5173baa0b4a92f02] <==
	* {"level":"info","ts":"2023-08-30T22:24:19.114406Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:34:19.190609Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2023-08-30T22:34:19.193366Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":715,"took":"2.294138ms","hash":2199048891}
	{"level":"info","ts":"2023-08-30T22:34:19.193452Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2199048891,"revision":715,"compact-revision":-1}
	{"level":"warn","ts":"2023-08-30T22:38:23.49387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.334858ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1130274075686849284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.104\" mod_revision:1149 > success:<request_put:<key:\"/registry/masterleases/192.168.61.104\" value_size:67 lease:1130274075686849282 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.104\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-08-30T22:38:23.494586Z","caller":"traceutil/trace.go:171","msg":"trace[437183512] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"203.85008ms","start":"2023-08-30T22:38:23.290699Z","end":"2023-08-30T22:38:23.494549Z","steps":["trace[437183512] 'read index received'  (duration: 64.69145ms)","trace[437183512] 'applied index is now lower than readState.Index'  (duration: 139.157114ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-30T22:38:23.494641Z","caller":"traceutil/trace.go:171","msg":"trace[1809768160] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"268.756095ms","start":"2023-08-30T22:38:23.225864Z","end":"2023-08-30T22:38:23.49462Z","steps":["trace[1809768160] 'process raft request'  (duration: 129.582295ms)","trace[1809768160] 'compare'  (duration: 137.145911ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T22:38:23.494894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.247617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2023-08-30T22:38:23.494963Z","caller":"traceutil/trace.go:171","msg":"trace[708534482] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1157; }","duration":"204.322308ms","start":"2023-08-30T22:38:23.290629Z","end":"2023-08-30T22:38:23.494951Z","steps":["trace[708534482] 'agreement among raft nodes before linearized reading'  (duration: 204.154443ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:38:23.770006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.471647ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1130274075686849290 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1155 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-08-30T22:38:23.770544Z","caller":"traceutil/trace.go:171","msg":"trace[1186921753] linearizableReadLoop","detail":"{readStateIndex:1340; appliedIndex:1339; }","duration":"168.98309ms","start":"2023-08-30T22:38:23.601549Z","end":"2023-08-30T22:38:23.770533Z","steps":["trace[1186921753] 'read index received'  (duration: 33.886496ms)","trace[1186921753] 'applied index is now lower than readState.Index'  (duration: 135.095128ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T22:38:23.770686Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.17188ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T22:38:23.770731Z","caller":"traceutil/trace.go:171","msg":"trace[1980188318] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1158; }","duration":"169.224483ms","start":"2023-08-30T22:38:23.601499Z","end":"2023-08-30T22:38:23.770724Z","steps":["trace[1980188318] 'agreement among raft nodes before linearized reading'  (duration: 169.150096ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T22:38:23.770886Z","caller":"traceutil/trace.go:171","msg":"trace[254881240] transaction","detail":"{read_only:false; response_revision:1158; number_of_response:1; }","duration":"269.146068ms","start":"2023-08-30T22:38:23.501733Z","end":"2023-08-30T22:38:23.770879Z","steps":["trace[254881240] 'process raft request'  (duration: 133.745398ms)","trace[254881240] 'compare'  (duration: 134.341783ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T22:38:24.063377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.640833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T22:38:24.063463Z","caller":"traceutil/trace.go:171","msg":"trace[807716595] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1158; }","duration":"111.735579ms","start":"2023-08-30T22:38:23.951713Z","end":"2023-08-30T22:38:24.063448Z","steps":["trace[807716595] 'range keys from in-memory index tree'  (duration: 111.48617ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T22:39:03.323765Z","caller":"traceutil/trace.go:171","msg":"trace[6693225] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"156.190823ms","start":"2023-08-30T22:39:03.167544Z","end":"2023-08-30T22:39:03.323735Z","steps":["trace[6693225] 'process raft request'  (duration: 127.62788ms)","trace[6693225] 'compare'  (duration: 28.377417ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-30T22:39:19.199253Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":958}
	{"level":"info","ts":"2023-08-30T22:39:19.200991Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":958,"took":"1.405997ms","hash":885702587}
	{"level":"info","ts":"2023-08-30T22:39:19.201048Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":885702587,"revision":958,"compact-revision":715}
	{"level":"info","ts":"2023-08-30T22:39:36.730917Z","caller":"traceutil/trace.go:171","msg":"trace[1162682834] linearizableReadLoop","detail":"{readStateIndex:1413; appliedIndex:1412; }","duration":"128.952243ms","start":"2023-08-30T22:39:36.601946Z","end":"2023-08-30T22:39:36.730898Z","steps":["trace[1162682834] 'read index received'  (duration: 128.755029ms)","trace[1162682834] 'applied index is now lower than readState.Index'  (duration: 196.575µs)"],"step_count":2}
	{"level":"warn","ts":"2023-08-30T22:39:36.73105Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.102398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T22:39:36.731077Z","caller":"traceutil/trace.go:171","msg":"trace[1430712248] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1216; }","duration":"129.146435ms","start":"2023-08-30T22:39:36.601921Z","end":"2023-08-30T22:39:36.731067Z","steps":["trace[1430712248] 'agreement among raft nodes before linearized reading'  (duration: 129.073253ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T22:39:36.731397Z","caller":"traceutil/trace.go:171","msg":"trace[1591430604] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"449.510753ms","start":"2023-08-30T22:39:36.281866Z","end":"2023-08-30T22:39:36.731377Z","steps":["trace[1591430604] 'process raft request'  (duration: 448.888902ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:39:36.731562Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:39:36.281852Z","time spent":"449.646082ms","remote":"127.0.0.1:44320","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1215 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> kernel <==
	*  22:40:50 up 22 min,  0 users,  load average: 0.05, 0.12, 0.15
	Linux default-k8s-diff-port-791007 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0a27e2279b8df02f2f4dc1fb3d54b8e193e918b281de284d7c86a90c497d8587] <==
	* I0830 22:38:21.286911       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.106.253:443: connect: connection refused
	I0830 22:38:21.287094       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0830 22:39:21.287665       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.106.253:443: connect: connection refused
	I0830 22:39:21.287746       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:39:21.369932       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:39:21.370126       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:39:21.370844       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.106.253:443: connect: connection refused
	I0830 22:39:21.370859       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:39:22.370765       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:39:22.370870       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0830 22:39:22.370884       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0830 22:39:22.371122       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:39:22.371191       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:39:22.372484       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:40:21.287617       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.106.253:443: connect: connection refused
	I0830 22:40:21.287680       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:40:22.371853       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:40:22.371909       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0830 22:40:22.371918       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0830 22:40:22.373150       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:40:22.373238       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:40:22.373251       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [a5597f1b16dd0dd4ca531d83f78a8e86223b48c7c0249a26ea8c34380d3891df] <==
	* I0830 22:35:08.356909       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:35:37.833580       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:35:38.369956       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0830 22:35:44.089063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="161.421µs"
	I0830 22:35:55.093352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="173.896µs"
	E0830 22:36:07.839693       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:36:08.378934       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:36:37.847659       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:36:38.391244       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:37:07.853709       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:37:08.403918       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:37:37.859852       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:37:38.413657       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:38:07.866769       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:38:08.423453       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:38:37.876091       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:38:38.438521       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:39:07.883044       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:39:08.448784       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:39:37.888978       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:39:38.459201       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:40:07.894953       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:40:08.467698       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:40:37.904156       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:40:38.481502       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [5b8928ed58904846e2aa02a09a3922f7980c29b5531c07e251e7e16b2a6d9856] <==
	* I0830 22:24:42.376912       1 server_others.go:69] "Using iptables proxy"
	I0830 22:24:42.419654       1 node.go:141] Successfully retrieved node IP: 192.168.61.104
	I0830 22:24:42.509410       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 22:24:42.509458       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 22:24:42.512888       1 server_others.go:152] "Using iptables Proxier"
	I0830 22:24:42.512983       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 22:24:42.513347       1 server.go:846] "Version info" version="v1.28.1"
	I0830 22:24:42.513384       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:24:42.514625       1 config.go:188] "Starting service config controller"
	I0830 22:24:42.514668       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 22:24:42.514687       1 config.go:97] "Starting endpoint slice config controller"
	I0830 22:24:42.514690       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 22:24:42.515258       1 config.go:315] "Starting node config controller"
	I0830 22:24:42.515368       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 22:24:42.614797       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 22:24:42.614833       1 shared_informer.go:318] Caches are synced for service config
	I0830 22:24:42.616390       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7a575d95cbfee930916ff2791381c6756176923852b5ff1dffb18a98dd93997f] <==
	* E0830 22:24:21.443141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 22:24:21.443147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 22:24:21.443361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 22:24:21.443493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 22:24:22.261922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 22:24:22.261994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0830 22:24:22.294543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 22:24:22.294597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0830 22:24:22.298470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 22:24:22.298522       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0830 22:24:22.375716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 22:24:22.375773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0830 22:24:22.390957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 22:24:22.391012       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0830 22:24:22.397585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 22:24:22.397639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0830 22:24:22.519700       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 22:24:22.519755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0830 22:24:22.578684       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 22:24:22.578746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0830 22:24:22.683741       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 22:24:22.683801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0830 22:24:22.908921       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 22:24:22.909001       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0830 22:24:25.803659       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:18:56 UTC, ends at Wed 2023-08-30 22:40:50 UTC. --
	Aug 30 22:38:25 default-k8s-diff-port-791007 kubelet[3860]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:38:25 default-k8s-diff-port-791007 kubelet[3860]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:38:25 default-k8s-diff-port-791007 kubelet[3860]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:38:39 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:38:39.072511    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:38:54 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:38:54.071750    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:39:07 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:39:07.071727    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:39:22 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:39:22.070984    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:39:25 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:39:25.197064    3860 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:39:25 default-k8s-diff-port-791007 kubelet[3860]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:39:25 default-k8s-diff-port-791007 kubelet[3860]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:39:25 default-k8s-diff-port-791007 kubelet[3860]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:39:25 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:39:25.262621    3860 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Aug 30 22:39:37 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:39:37.071428    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:39:52 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:39:52.071929    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:40:06 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:40:06.071939    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:40:20 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:40:20.070779    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:40:25 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:40:25.199460    3860 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:40:25 default-k8s-diff-port-791007 kubelet[3860]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:40:25 default-k8s-diff-port-791007 kubelet[3860]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:40:25 default-k8s-diff-port-791007 kubelet[3860]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:40:31 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:40:31.073842    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	Aug 30 22:40:44 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:40:44.092101    3860 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 30 22:40:44 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:40:44.092164    3860 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 30 22:40:44 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:40:44.092540    3860 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2b46r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-dllmg_kube-system(6826d918-a2ac-4744-8145-f6d7599499af): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:40:44 default-k8s-diff-port-791007 kubelet[3860]: E0830 22:40:44.092605    3860 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-dllmg" podUID="6826d918-a2ac-4744-8145-f6d7599499af"
	
	* 
	* ==> storage-provisioner [bded3689c729f0c787ddc1826fd9eeb8a3de167c59cfe82758ce6830d906b047] <==
	* I0830 22:24:42.427407       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 22:24:42.445101       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 22:24:42.445221       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 22:24:42.467974       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 22:24:42.469679       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-791007_fcee0bc5-5789-4dbf-99e4-500d5de68deb!
	I0830 22:24:42.469181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a33a8bc6-bc66-4005-a6d7-a2d3f8629ead", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-791007_fcee0bc5-5789-4dbf-99e4-500d5de68deb became leader
	I0830 22:24:42.571413       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-791007_fcee0bc5-5789-4dbf-99e4-500d5de68deb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-791007 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-dllmg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-791007 describe pod metrics-server-57f55c9bc5-dllmg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-791007 describe pod metrics-server-57f55c9bc5-dllmg: exit status 1 (83.393687ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-dllmg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-791007 describe pod metrics-server-57f55c9bc5-dllmg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (423.47s)
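The image-pull failures for metrics-server in the kubelet log above trace back to the addon having been enabled with its registry remapped to fake.domain, which does not resolve (see the corresponding "addons enable metrics-server -p default-k8s-diff-port-791007" row in the Audit table further down). A minimal sketch of the equivalent manual check, assuming the addon's usual metrics-server Deployment name and k8s-app=metrics-server pod label in kube-system:

	kubectl --context default-k8s-diff-port-791007 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context default-k8s-diff-port-791007 -n kube-system describe pod -l k8s-app=metrics-server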

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (267.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-698195 -n no-preload-698195
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-08-30 22:38:22.999334904 +0000 UTC m=+5345.192084828
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-698195 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-698195 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.85µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-698195 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
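The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper Deployment to carry the overridden image registry.k8s.io/echoserver:1.4 (per the "addons enable dashboard -p no-preload-698195" row in the Audit table below); because the dashboard pods never appeared within 9m0s, the follow-up describe call hit the same context deadline and the deployment info above is empty. A minimal sketch of the equivalent manual check, assuming the kubernetes-dashboard namespace and deployment name used by the dashboard addon:

	kubectl --context no-preload-698195 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-698195 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'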
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-698195 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-698195 logs -n 25: (2.434079817s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-184733                              | stopped-upgrade-184733       | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:09 UTC |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	| delete  | -p                                                     | disable-driver-mounts-883991 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | disable-driver-mounts-883991                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:12 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-698195             | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208903            | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-791007  | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC | 30 Aug 23 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-698195                  | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208903                 | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC | 30 Aug 23 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-250163        | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC | 30 Aug 23 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-791007       | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC | 30 Aug 23 22:24 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-250163             | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC | 30 Aug 23 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:37 UTC | 30 Aug 23 22:37 UTC |
	| start   | -p newest-cni-618803 --memory=2200 --alsologtostderr   | newest-cni-618803            | jenkins | v1.31.2 | 30 Aug 23 22:37 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:37:48
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:37:48.735957 1000447 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:37:48.736088 1000447 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:37:48.736097 1000447 out.go:309] Setting ErrFile to fd 2...
	I0830 22:37:48.736101 1000447 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:37:48.736336 1000447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:37:48.736993 1000447 out.go:303] Setting JSON to false
	I0830 22:37:48.738012 1000447 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":15616,"bootTime":1693419453,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:37:48.738073 1000447 start.go:138] virtualization: kvm guest
	I0830 22:37:48.740980 1000447 out.go:177] * [newest-cni-618803] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:37:48.742910 1000447 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:37:48.744336 1000447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:37:48.742952 1000447 notify.go:220] Checking for updates...
	I0830 22:37:48.745793 1000447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:37:48.747259 1000447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:37:48.748726 1000447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:37:48.750189 1000447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:37:48.751947 1000447 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:37:48.752078 1000447 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:37:48.752177 1000447 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:37:48.752294 1000447 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:37:48.788882 1000447 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 22:37:48.790159 1000447 start.go:298] selected driver: kvm2
	I0830 22:37:48.790187 1000447 start.go:902] validating driver "kvm2" against <nil>
	I0830 22:37:48.790201 1000447 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:37:48.791020 1000447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:37:48.791112 1000447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:37:48.807299 1000447 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:37:48.807348 1000447 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0830 22:37:48.807399 1000447 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0830 22:37:48.807704 1000447 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0830 22:37:48.807743 1000447 cni.go:84] Creating CNI manager for ""
	I0830 22:37:48.807753 1000447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:37:48.807759 1000447 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0830 22:37:48.807812 1000447 start_flags.go:319] config:
	{Name:newest-cni-618803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-618803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0}
	I0830 22:37:48.807989 1000447 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:37:48.810231 1000447 out.go:177] * Starting control plane node newest-cni-618803 in cluster newest-cni-618803
	I0830 22:37:48.811543 1000447 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:37:48.811613 1000447 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0830 22:37:48.811638 1000447 cache.go:57] Caching tarball of preloaded images
	I0830 22:37:48.811719 1000447 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:37:48.811733 1000447 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 22:37:48.811877 1000447 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/config.json ...
	I0830 22:37:48.811900 1000447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/config.json: {Name:mk0d08d267644c6469e67b457090180282624e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:37:48.812087 1000447 start.go:365] acquiring machines lock for newest-cni-618803: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:37:48.812123 1000447 start.go:369] acquired machines lock for "newest-cni-618803" in 19.753µs
	I0830 22:37:48.812146 1000447 start.go:93] Provisioning new machine with config: &{Name:newest-cni-618803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-618803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenki
ns:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:37:48.812242 1000447 start.go:125] createHost starting for "" (driver="kvm2")
	I0830 22:37:48.814122 1000447 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0830 22:37:48.814267 1000447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:37:48.814319 1000447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:37:48.828884 1000447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0830 22:37:48.829378 1000447 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:37:48.830074 1000447 main.go:141] libmachine: Using API Version  1
	I0830 22:37:48.830097 1000447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:37:48.830418 1000447 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:37:48.830619 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetMachineName
	I0830 22:37:48.830766 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:37:48.830928 1000447 start.go:159] libmachine.API.Create for "newest-cni-618803" (driver="kvm2")
	I0830 22:37:48.830965 1000447 client.go:168] LocalClient.Create starting
	I0830 22:37:48.831000 1000447 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem
	I0830 22:37:48.831046 1000447 main.go:141] libmachine: Decoding PEM data...
	I0830 22:37:48.831061 1000447 main.go:141] libmachine: Parsing certificate...
	I0830 22:37:48.831110 1000447 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem
	I0830 22:37:48.831128 1000447 main.go:141] libmachine: Decoding PEM data...
	I0830 22:37:48.831139 1000447 main.go:141] libmachine: Parsing certificate...
	I0830 22:37:48.831166 1000447 main.go:141] libmachine: Running pre-create checks...
	I0830 22:37:48.831175 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .PreCreateCheck
	I0830 22:37:48.831583 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetConfigRaw
	I0830 22:37:48.832011 1000447 main.go:141] libmachine: Creating machine...
	I0830 22:37:48.832026 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .Create
	I0830 22:37:48.832156 1000447 main.go:141] libmachine: (newest-cni-618803) Creating KVM machine...
	I0830 22:37:48.833371 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found existing default KVM network
	I0830 22:37:48.835112 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:48.834962 1000469 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d7c0}
	I0830 22:37:48.840549 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | trying to create private KVM network mk-newest-cni-618803 192.168.39.0/24...
	I0830 22:37:48.913909 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | private KVM network mk-newest-cni-618803 192.168.39.0/24 created
	I0830 22:37:48.913950 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:48.913862 1000469 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:37:48.913968 1000447 main.go:141] libmachine: (newest-cni-618803) Setting up store path in /home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803 ...
	I0830 22:37:48.913990 1000447 main.go:141] libmachine: (newest-cni-618803) Building disk image from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 22:37:48.914009 1000447 main.go:141] libmachine: (newest-cni-618803) Downloading /home/jenkins/minikube-integration/17114-955377/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0830 22:37:49.166581 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:49.166440 1000469 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/id_rsa...
	I0830 22:37:49.451412 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:49.451251 1000469 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/newest-cni-618803.rawdisk...
	I0830 22:37:49.451446 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Writing magic tar header
	I0830 22:37:49.451460 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Writing SSH key tar header
	I0830 22:37:49.451556 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:49.451476 1000469 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803 ...
	I0830 22:37:49.451663 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803
	I0830 22:37:49.451691 1000447 main.go:141] libmachine: (newest-cni-618803) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803 (perms=drwx------)
	I0830 22:37:49.451703 1000447 main.go:141] libmachine: (newest-cni-618803) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube/machines (perms=drwxr-xr-x)
	I0830 22:37:49.451714 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube/machines
	I0830 22:37:49.451724 1000447 main.go:141] libmachine: (newest-cni-618803) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377/.minikube (perms=drwxr-xr-x)
	I0830 22:37:49.451739 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:37:49.451751 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17114-955377
	I0830 22:37:49.451783 1000447 main.go:141] libmachine: (newest-cni-618803) Setting executable bit set on /home/jenkins/minikube-integration/17114-955377 (perms=drwxrwxr-x)
	I0830 22:37:49.451802 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0830 22:37:49.451817 1000447 main.go:141] libmachine: (newest-cni-618803) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0830 22:37:49.451837 1000447 main.go:141] libmachine: (newest-cni-618803) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0830 22:37:49.451851 1000447 main.go:141] libmachine: (newest-cni-618803) Creating domain...
	I0830 22:37:49.451883 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Checking permissions on dir: /home/jenkins
	I0830 22:37:49.451901 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Checking permissions on dir: /home
	I0830 22:37:49.451916 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Skipping /home - not owner
	I0830 22:37:49.453029 1000447 main.go:141] libmachine: (newest-cni-618803) define libvirt domain using xml: 
	I0830 22:37:49.453065 1000447 main.go:141] libmachine: (newest-cni-618803) <domain type='kvm'>
	I0830 22:37:49.453076 1000447 main.go:141] libmachine: (newest-cni-618803)   <name>newest-cni-618803</name>
	I0830 22:37:49.453082 1000447 main.go:141] libmachine: (newest-cni-618803)   <memory unit='MiB'>2200</memory>
	I0830 22:37:49.453088 1000447 main.go:141] libmachine: (newest-cni-618803)   <vcpu>2</vcpu>
	I0830 22:37:49.453094 1000447 main.go:141] libmachine: (newest-cni-618803)   <features>
	I0830 22:37:49.453103 1000447 main.go:141] libmachine: (newest-cni-618803)     <acpi/>
	I0830 22:37:49.453108 1000447 main.go:141] libmachine: (newest-cni-618803)     <apic/>
	I0830 22:37:49.453116 1000447 main.go:141] libmachine: (newest-cni-618803)     <pae/>
	I0830 22:37:49.453124 1000447 main.go:141] libmachine: (newest-cni-618803)     
	I0830 22:37:49.453132 1000447 main.go:141] libmachine: (newest-cni-618803)   </features>
	I0830 22:37:49.453138 1000447 main.go:141] libmachine: (newest-cni-618803)   <cpu mode='host-passthrough'>
	I0830 22:37:49.453144 1000447 main.go:141] libmachine: (newest-cni-618803)   
	I0830 22:37:49.453149 1000447 main.go:141] libmachine: (newest-cni-618803)   </cpu>
	I0830 22:37:49.453176 1000447 main.go:141] libmachine: (newest-cni-618803)   <os>
	I0830 22:37:49.453201 1000447 main.go:141] libmachine: (newest-cni-618803)     <type>hvm</type>
	I0830 22:37:49.453213 1000447 main.go:141] libmachine: (newest-cni-618803)     <boot dev='cdrom'/>
	I0830 22:37:49.453226 1000447 main.go:141] libmachine: (newest-cni-618803)     <boot dev='hd'/>
	I0830 22:37:49.453241 1000447 main.go:141] libmachine: (newest-cni-618803)     <bootmenu enable='no'/>
	I0830 22:37:49.453260 1000447 main.go:141] libmachine: (newest-cni-618803)   </os>
	I0830 22:37:49.453326 1000447 main.go:141] libmachine: (newest-cni-618803)   <devices>
	I0830 22:37:49.453358 1000447 main.go:141] libmachine: (newest-cni-618803)     <disk type='file' device='cdrom'>
	I0830 22:37:49.453382 1000447 main.go:141] libmachine: (newest-cni-618803)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/boot2docker.iso'/>
	I0830 22:37:49.453401 1000447 main.go:141] libmachine: (newest-cni-618803)       <target dev='hdc' bus='scsi'/>
	I0830 22:37:49.453414 1000447 main.go:141] libmachine: (newest-cni-618803)       <readonly/>
	I0830 22:37:49.453426 1000447 main.go:141] libmachine: (newest-cni-618803)     </disk>
	I0830 22:37:49.453438 1000447 main.go:141] libmachine: (newest-cni-618803)     <disk type='file' device='disk'>
	I0830 22:37:49.453447 1000447 main.go:141] libmachine: (newest-cni-618803)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0830 22:37:49.453459 1000447 main.go:141] libmachine: (newest-cni-618803)       <source file='/home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/newest-cni-618803.rawdisk'/>
	I0830 22:37:49.453466 1000447 main.go:141] libmachine: (newest-cni-618803)       <target dev='hda' bus='virtio'/>
	I0830 22:37:49.453487 1000447 main.go:141] libmachine: (newest-cni-618803)     </disk>
	I0830 22:37:49.453503 1000447 main.go:141] libmachine: (newest-cni-618803)     <interface type='network'>
	I0830 22:37:49.453527 1000447 main.go:141] libmachine: (newest-cni-618803)       <source network='mk-newest-cni-618803'/>
	I0830 22:37:49.453554 1000447 main.go:141] libmachine: (newest-cni-618803)       <model type='virtio'/>
	I0830 22:37:49.453568 1000447 main.go:141] libmachine: (newest-cni-618803)     </interface>
	I0830 22:37:49.453580 1000447 main.go:141] libmachine: (newest-cni-618803)     <interface type='network'>
	I0830 22:37:49.453600 1000447 main.go:141] libmachine: (newest-cni-618803)       <source network='default'/>
	I0830 22:37:49.453617 1000447 main.go:141] libmachine: (newest-cni-618803)       <model type='virtio'/>
	I0830 22:37:49.453627 1000447 main.go:141] libmachine: (newest-cni-618803)     </interface>
	I0830 22:37:49.453635 1000447 main.go:141] libmachine: (newest-cni-618803)     <serial type='pty'>
	I0830 22:37:49.453650 1000447 main.go:141] libmachine: (newest-cni-618803)       <target port='0'/>
	I0830 22:37:49.453662 1000447 main.go:141] libmachine: (newest-cni-618803)     </serial>
	I0830 22:37:49.453685 1000447 main.go:141] libmachine: (newest-cni-618803)     <console type='pty'>
	I0830 22:37:49.453702 1000447 main.go:141] libmachine: (newest-cni-618803)       <target type='serial' port='0'/>
	I0830 22:37:49.453712 1000447 main.go:141] libmachine: (newest-cni-618803)     </console>
	I0830 22:37:49.453724 1000447 main.go:141] libmachine: (newest-cni-618803)     <rng model='virtio'>
	I0830 22:37:49.453742 1000447 main.go:141] libmachine: (newest-cni-618803)       <backend model='random'>/dev/random</backend>
	I0830 22:37:49.453754 1000447 main.go:141] libmachine: (newest-cni-618803)     </rng>
	I0830 22:37:49.453775 1000447 main.go:141] libmachine: (newest-cni-618803)     
	I0830 22:37:49.453793 1000447 main.go:141] libmachine: (newest-cni-618803)     
	I0830 22:37:49.453814 1000447 main.go:141] libmachine: (newest-cni-618803)   </devices>
	I0830 22:37:49.453826 1000447 main.go:141] libmachine: (newest-cni-618803) </domain>
	I0830 22:37:49.453844 1000447 main.go:141] libmachine: (newest-cni-618803) 
	I0830 22:37:49.458375 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:a3:4b:40 in network default
	I0830 22:37:49.459008 1000447 main.go:141] libmachine: (newest-cni-618803) Ensuring networks are active...
	I0830 22:37:49.459082 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:49.459807 1000447 main.go:141] libmachine: (newest-cni-618803) Ensuring network default is active
	I0830 22:37:49.460206 1000447 main.go:141] libmachine: (newest-cni-618803) Ensuring network mk-newest-cni-618803 is active
	I0830 22:37:49.460893 1000447 main.go:141] libmachine: (newest-cni-618803) Getting domain xml...
	I0830 22:37:49.461614 1000447 main.go:141] libmachine: (newest-cni-618803) Creating domain...
	I0830 22:37:50.721775 1000447 main.go:141] libmachine: (newest-cni-618803) Waiting to get IP...
	I0830 22:37:50.722479 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:50.722836 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:50.722924 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:50.722830 1000469 retry.go:31] will retry after 305.017142ms: waiting for machine to come up
	I0830 22:37:51.029559 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:51.030053 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:51.030086 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:51.030009 1000469 retry.go:31] will retry after 305.028495ms: waiting for machine to come up
	I0830 22:37:51.336462 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:51.336907 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:51.336941 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:51.336859 1000469 retry.go:31] will retry after 473.199449ms: waiting for machine to come up
	I0830 22:37:51.811607 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:51.812085 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:51.812145 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:51.812067 1000469 retry.go:31] will retry after 498.466415ms: waiting for machine to come up
	I0830 22:37:52.311741 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:52.312312 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:52.312345 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:52.312249 1000469 retry.go:31] will retry after 704.94063ms: waiting for machine to come up
	I0830 22:37:53.019381 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:53.019843 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:53.019874 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:53.019788 1000469 retry.go:31] will retry after 775.473055ms: waiting for machine to come up
	I0830 22:37:53.796726 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:53.797199 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:53.797243 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:53.797157 1000469 retry.go:31] will retry after 935.819857ms: waiting for machine to come up
	I0830 22:37:54.734702 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:54.735171 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:54.735198 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:54.735128 1000469 retry.go:31] will retry after 1.147975779s: waiting for machine to come up
	I0830 22:37:55.884313 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:55.884722 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:55.884755 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:55.884662 1000469 retry.go:31] will retry after 1.344139679s: waiting for machine to come up
	I0830 22:37:57.231002 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:57.231537 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:57.231570 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:57.231511 1000469 retry.go:31] will retry after 2.055415241s: waiting for machine to come up
	I0830 22:37:59.288987 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:37:59.289515 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:37:59.289550 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:37:59.289491 1000469 retry.go:31] will retry after 2.450911825s: waiting for machine to come up
	I0830 22:38:01.742152 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:01.742605 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:38:01.742639 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:38:01.742514 1000469 retry.go:31] will retry after 3.204804659s: waiting for machine to come up
	I0830 22:38:04.949034 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:04.949533 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:38:04.949568 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:38:04.949504 1000469 retry.go:31] will retry after 4.413159923s: waiting for machine to come up
	I0830 22:38:09.363881 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:09.364289 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find current IP address of domain newest-cni-618803 in network mk-newest-cni-618803
	I0830 22:38:09.364323 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | I0830 22:38:09.364235 1000469 retry.go:31] will retry after 4.864930338s: waiting for machine to come up
	I0830 22:38:14.230308 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.230764 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has current primary IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.230802 1000447 main.go:141] libmachine: (newest-cni-618803) Found IP for machine: 192.168.39.211
	I0830 22:38:14.230812 1000447 main.go:141] libmachine: (newest-cni-618803) Reserving static IP address...
	I0830 22:38:14.231246 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | unable to find host DHCP lease matching {name: "newest-cni-618803", mac: "52:54:00:68:c3:8e", ip: "192.168.39.211"} in network mk-newest-cni-618803
	I0830 22:38:14.308600 1000447 main.go:141] libmachine: (newest-cni-618803) Reserved static IP address: 192.168.39.211
	I0830 22:38:14.308633 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Getting to WaitForSSH function...
	I0830 22:38:14.308644 1000447 main.go:141] libmachine: (newest-cni-618803) Waiting for SSH to be available...
	I0830 22:38:14.311003 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.311413 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:14.311453 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.311578 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Using SSH client type: external
	I0830 22:38:14.311608 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/id_rsa (-rw-------)
	I0830 22:38:14.311641 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:38:14.311670 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | About to run SSH command:
	I0830 22:38:14.311683 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | exit 0
	I0830 22:38:14.411699 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | SSH cmd err, output: <nil>: 
	I0830 22:38:14.411975 1000447 main.go:141] libmachine: (newest-cni-618803) KVM machine creation complete!
	I0830 22:38:14.412309 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetConfigRaw
	I0830 22:38:14.412886 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:38:14.413077 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:38:14.413255 1000447 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0830 22:38:14.413276 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetState
	I0830 22:38:14.414512 1000447 main.go:141] libmachine: Detecting operating system of created instance...
	I0830 22:38:14.414557 1000447 main.go:141] libmachine: Waiting for SSH to be available...
	I0830 22:38:14.414576 1000447 main.go:141] libmachine: Getting to WaitForSSH function...
	I0830 22:38:14.414610 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:14.417026 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.417419 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:14.417457 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.417582 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:14.417748 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:14.417903 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:14.418050 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:14.418218 1000447 main.go:141] libmachine: Using SSH client type: native
	I0830 22:38:14.418731 1000447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0830 22:38:14.418748 1000447 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0830 22:38:14.543109 1000447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:38:14.543134 1000447 main.go:141] libmachine: Detecting the provisioner...
	I0830 22:38:14.543143 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:14.546111 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.546487 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:14.546519 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.546721 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:14.546945 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:14.547102 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:14.547242 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:14.547458 1000447 main.go:141] libmachine: Using SSH client type: native
	I0830 22:38:14.547915 1000447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0830 22:38:14.547930 1000447 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0830 22:38:14.672555 1000447 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0830 22:38:14.672736 1000447 main.go:141] libmachine: found compatible host: buildroot
	I0830 22:38:14.672756 1000447 main.go:141] libmachine: Provisioning with buildroot...
	I0830 22:38:14.672768 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetMachineName
	I0830 22:38:14.673040 1000447 buildroot.go:166] provisioning hostname "newest-cni-618803"
	I0830 22:38:14.673083 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetMachineName
	I0830 22:38:14.673300 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:14.676275 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.676701 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:14.676732 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.676917 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:14.677146 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:14.677373 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:14.677539 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:14.677760 1000447 main.go:141] libmachine: Using SSH client type: native
	I0830 22:38:14.678347 1000447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0830 22:38:14.678370 1000447 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-618803 && echo "newest-cni-618803" | sudo tee /etc/hostname
	I0830 22:38:14.818556 1000447 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-618803
	
	I0830 22:38:14.818598 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:14.821872 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.822303 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:14.822371 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.822525 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:14.822718 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:14.822896 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:14.823018 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:14.823190 1000447 main.go:141] libmachine: Using SSH client type: native
	I0830 22:38:14.823588 1000447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0830 22:38:14.823604 1000447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-618803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-618803/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-618803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:38:14.956459 1000447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:38:14.956501 1000447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:38:14.956557 1000447 buildroot.go:174] setting up certificates
	I0830 22:38:14.956573 1000447 provision.go:83] configureAuth start
	I0830 22:38:14.956589 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetMachineName
	I0830 22:38:14.956925 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetIP
	I0830 22:38:14.960028 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.960388 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:14.960436 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.960640 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:14.963260 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.963652 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:14.963703 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:14.963872 1000447 provision.go:138] copyHostCerts
	I0830 22:38:14.963938 1000447 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:38:14.963957 1000447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:38:14.964032 1000447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:38:14.964157 1000447 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:38:14.964169 1000447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:38:14.964211 1000447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:38:14.964300 1000447 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:38:14.964313 1000447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:38:14.964343 1000447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:38:14.964425 1000447 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.newest-cni-618803 san=[192.168.39.211 192.168.39.211 localhost 127.0.0.1 minikube newest-cni-618803]
	I0830 22:38:15.077238 1000447 provision.go:172] copyRemoteCerts
	I0830 22:38:15.077316 1000447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:38:15.077351 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:15.080575 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.080935 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.080971 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.081149 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:15.081360 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:15.081520 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:15.081645 1000447 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/id_rsa Username:docker}
	I0830 22:38:15.175213 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:38:15.199824 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:38:15.222801 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:38:15.245608 1000447 provision.go:86] duration metric: configureAuth took 289.019255ms
	I0830 22:38:15.245635 1000447 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:38:15.245866 1000447 config.go:182] Loaded profile config "newest-cni-618803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:38:15.245963 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:15.249096 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.249469 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.249498 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.249687 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:15.249869 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:15.250011 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:15.250176 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:15.250335 1000447 main.go:141] libmachine: Using SSH client type: native
	I0830 22:38:15.250919 1000447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0830 22:38:15.250945 1000447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:38:15.562693 1000447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:38:15.562726 1000447 main.go:141] libmachine: Checking connection to Docker...
	I0830 22:38:15.562735 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetURL
	I0830 22:38:15.564317 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | Using libvirt version 6000000
	I0830 22:38:15.566312 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.566737 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.566774 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.566927 1000447 main.go:141] libmachine: Docker is up and running!
	I0830 22:38:15.566941 1000447 main.go:141] libmachine: Reticulating splines...
	I0830 22:38:15.566949 1000447 client.go:171] LocalClient.Create took 26.735972284s
	I0830 22:38:15.566971 1000447 start.go:167] duration metric: libmachine.API.Create for "newest-cni-618803" took 26.736046019s
	I0830 22:38:15.566980 1000447 start.go:300] post-start starting for "newest-cni-618803" (driver="kvm2")
	I0830 22:38:15.566990 1000447 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:38:15.567008 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:38:15.567270 1000447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:38:15.567305 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:15.569607 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.569958 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.570002 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.570077 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:15.570280 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:15.570450 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:15.570609 1000447 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/id_rsa Username:docker}
	I0830 22:38:15.661142 1000447 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:38:15.665482 1000447 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:38:15.665509 1000447 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:38:15.665598 1000447 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:38:15.665686 1000447 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:38:15.665807 1000447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:38:15.674148 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:38:15.697378 1000447 start.go:303] post-start completed in 130.381816ms
	I0830 22:38:15.697441 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetConfigRaw
	I0830 22:38:15.698009 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetIP
	I0830 22:38:15.700568 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.700888 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.700922 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.701199 1000447 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/config.json ...
	I0830 22:38:15.701388 1000447 start.go:128] duration metric: createHost completed in 26.889137687s
	I0830 22:38:15.701435 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:15.703607 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.704022 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.704065 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.704193 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:15.704397 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:15.704565 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:15.704679 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:15.704818 1000447 main.go:141] libmachine: Using SSH client type: native
	I0830 22:38:15.705229 1000447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0830 22:38:15.705244 1000447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:38:15.828794 1000447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693435095.812677109
	
	I0830 22:38:15.828822 1000447 fix.go:206] guest clock: 1693435095.812677109
	I0830 22:38:15.828832 1000447 fix.go:219] Guest: 2023-08-30 22:38:15.812677109 +0000 UTC Remote: 2023-08-30 22:38:15.701423274 +0000 UTC m=+27.014801784 (delta=111.253835ms)
	I0830 22:38:15.828916 1000447 fix.go:190] guest clock delta is within tolerance: 111.253835ms
	I0830 22:38:15.828923 1000447 start.go:83] releasing machines lock for "newest-cni-618803", held for 27.016788948s
	I0830 22:38:15.828958 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:38:15.829262 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetIP
	I0830 22:38:15.832127 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.832486 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.832519 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.832634 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:38:15.833148 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:38:15.833344 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .DriverName
	I0830 22:38:15.833455 1000447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:38:15.833495 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:15.833567 1000447 ssh_runner.go:195] Run: cat /version.json
	I0830 22:38:15.833583 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHHostname
	I0830 22:38:15.836296 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.836478 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.836692 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.836723 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.836903 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:15.837009 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:15.837046 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:15.837083 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:15.837161 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHPort
	I0830 22:38:15.837238 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:15.837308 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHKeyPath
	I0830 22:38:15.837352 1000447 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/id_rsa Username:docker}
	I0830 22:38:15.837405 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetSSHUsername
	I0830 22:38:15.837541 1000447 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/newest-cni-618803/id_rsa Username:docker}
	I0830 22:38:15.945260 1000447 ssh_runner.go:195] Run: systemctl --version
	I0830 22:38:15.951197 1000447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:38:16.112099 1000447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:38:16.118380 1000447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:38:16.118451 1000447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:38:16.134463 1000447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:38:16.134512 1000447 start.go:466] detecting cgroup driver to use...
	I0830 22:38:16.134578 1000447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:38:16.148256 1000447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:38:16.160714 1000447 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:38:16.160779 1000447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:38:16.173688 1000447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:38:16.186614 1000447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:38:16.300002 1000447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:38:16.426274 1000447 docker.go:212] disabling docker service ...
	I0830 22:38:16.426367 1000447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:38:16.440761 1000447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:38:16.452439 1000447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:38:16.587045 1000447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:38:16.715659 1000447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:38:16.728821 1000447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:38:16.746225 1000447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:38:16.746296 1000447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:38:16.756115 1000447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:38:16.756186 1000447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:38:16.765850 1000447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:38:16.774942 1000447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:38:16.785437 1000447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:38:16.795098 1000447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:38:16.803113 1000447 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:38:16.803155 1000447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:38:16.815683 1000447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:38:16.825154 1000447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:38:16.954683 1000447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:38:17.128039 1000447 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:38:17.128128 1000447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:38:17.133373 1000447 start.go:534] Will wait 60s for crictl version
	I0830 22:38:17.133433 1000447 ssh_runner.go:195] Run: which crictl
	I0830 22:38:17.137640 1000447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:38:17.172140 1000447 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:38:17.172224 1000447 ssh_runner.go:195] Run: crio --version
	I0830 22:38:17.222148 1000447 ssh_runner.go:195] Run: crio --version
	I0830 22:38:17.286827 1000447 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:38:17.288335 1000447 main.go:141] libmachine: (newest-cni-618803) Calling .GetIP
	I0830 22:38:17.291025 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:17.291355 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:c3:8e", ip: ""} in network mk-newest-cni-618803: {Iface:virbr1 ExpiryTime:2023-08-30 23:38:05 +0000 UTC Type:0 Mac:52:54:00:68:c3:8e Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:newest-cni-618803 Clientid:01:52:54:00:68:c3:8e}
	I0830 22:38:17.291392 1000447 main.go:141] libmachine: (newest-cni-618803) DBG | domain newest-cni-618803 has defined IP address 192.168.39.211 and MAC address 52:54:00:68:c3:8e in network mk-newest-cni-618803
	I0830 22:38:17.291612 1000447 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 22:38:17.295681 1000447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:38:17.309215 1000447 localpath.go:92] copying /home/jenkins/minikube-integration/17114-955377/.minikube/client.crt -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/client.crt
	I0830 22:38:17.309407 1000447 localpath.go:117] copying /home/jenkins/minikube-integration/17114-955377/.minikube/client.key -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/client.key
	I0830 22:38:17.311433 1000447 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0830 22:38:17.312695 1000447 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:38:17.312759 1000447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:38:17.338972 1000447 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:38:17.339063 1000447 ssh_runner.go:195] Run: which lz4
	I0830 22:38:17.343220 1000447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:38:17.347688 1000447 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:38:17.347722 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:38:19.182672 1000447 crio.go:444] Took 1.839483 seconds to copy over tarball
	I0830 22:38:19.182775 1000447 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:38:22.276634 1000447 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.09382918s)
	I0830 22:38:22.276661 1000447 crio.go:451] Took 3.093949 seconds to extract the tarball
	I0830 22:38:22.276677 1000447 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:38:22.317233 1000447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:38:22.386907 1000447 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:38:22.386936 1000447 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:38:22.387030 1000447 ssh_runner.go:195] Run: crio config
	I0830 22:38:22.449195 1000447 cni.go:84] Creating CNI manager for ""
	I0830 22:38:22.449216 1000447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:38:22.449235 1000447 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0830 22:38:22.449254 1000447 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-618803 NodeName:newest-cni-618803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:38:22.449395 1000447 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-618803"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:38:22.449474 1000447 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-618803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:newest-cni-618803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:38:22.449541 1000447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:38:22.459182 1000447 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:38:22.459287 1000447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:38:22.468675 1000447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0830 22:38:22.485643 1000447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:38:22.502433 1000447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0830 22:38:22.519385 1000447 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0830 22:38:22.523432 1000447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:38:22.537081 1000447 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803 for IP: 192.168.39.211
	I0830 22:38:22.537125 1000447 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:38:22.537347 1000447 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:38:22.537406 1000447 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:38:22.537519 1000447 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/client.key
	I0830 22:38:22.537555 1000447 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.key.5ce1f1d4
	I0830 22:38:22.537574 1000447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.crt.5ce1f1d4 with IP's: [192.168.39.211 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 22:38:22.907749 1000447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.crt.5ce1f1d4 ...
	I0830 22:38:22.907793 1000447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.crt.5ce1f1d4: {Name:mk86f13d407785ee90e55d6ec40b7ae8b6a4bc6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:38:22.907992 1000447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.key.5ce1f1d4 ...
	I0830 22:38:22.908006 1000447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.key.5ce1f1d4: {Name:mka2417e0e2c827e27f348a4c6548f15c50324d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:38:22.908108 1000447 certs.go:337] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.crt.5ce1f1d4 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.crt
	I0830 22:38:22.908202 1000447 certs.go:341] copying /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.key.5ce1f1d4 -> /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.key
	I0830 22:38:22.908296 1000447 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/proxy-client.key
	I0830 22:38:22.908321 1000447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/proxy-client.crt with IP's: []
	I0830 22:38:23.024652 1000447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/proxy-client.crt ...
	I0830 22:38:23.024692 1000447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/proxy-client.crt: {Name:mk14267788835125cbca208155d7e118f9cbf1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:38:23.024878 1000447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/proxy-client.key ...
	I0830 22:38:23.024899 1000447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/proxy-client.key: {Name:mkbd7474bc7356c134198406b0bfe5d55f070212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:38:23.025132 1000447 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:38:23.025177 1000447 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:38:23.025198 1000447 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:38:23.025231 1000447 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:38:23.025260 1000447 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:38:23.025292 1000447 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:38:23.025348 1000447 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:38:23.025937 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:38:23.054081 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:38:23.083065 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:38:23.110445 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/newest-cni-618803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:38:23.136635 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:38:23.163616 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:38:23.189109 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:38:23.212906 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:38:23.241103 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:38:23.266565 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:38:23.292850 1000447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:38:23.321080 1000447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:38:23.340326 1000447 ssh_runner.go:195] Run: openssl version
	I0830 22:38:23.348186 1000447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:38:23.359195 1000447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:38:23.364251 1000447 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:38:23.364297 1000447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:38:23.370263 1000447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:38:23.380404 1000447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:38:23.390223 1000447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:38:23.394837 1000447 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:38:23.394892 1000447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:38:23.400489 1000447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:38:23.410049 1000447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:38:23.420061 1000447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:38:23.424647 1000447 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:38:23.424702 1000447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:38:23.430268 1000447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:38:23.440066 1000447 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:38:23.444298 1000447 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 22:38:23.444360 1000447 kubeadm.go:404] StartCluster: {Name:newest-cni-618803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:newest-cni-618803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:38:23.444474 1000447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:38:23.444560 1000447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:38:23.477500 1000447 cri.go:89] found id: ""
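The `found id: ""` line is the result of the crictl probe issued just above it: before deciding how to (re)start the cluster, minikube asks the runtime for any surviving kube-system containers, and an empty list means there is nothing to adopt or clean up. A rough sketch of that probe, assuming crictl is on PATH (the helper name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers returns the IDs crictl reports for containers in
    // the kube-system namespace, mirroring the command in the log above.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one container ID per line
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println("crictl probe failed:", err)
            return
        }
        fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }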
	I0830 22:38:23.477583 1000447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:38:23.487801 1000447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:38:23.497410 1000447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:38:23.508145 1000447 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
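The "config check failed, skipping stale config cleanup" path, like the earlier etcd-certs check, treats an `ls` that exits with status 2 as "files absent" rather than as a hard error. A hedged sketch of that distinction (filesPresent is a hypothetical helper, run locally here rather than over the SSH runner):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // filesPresent runs ls on the given paths and reports whether they all exist,
    // distinguishing "not there" (ls exits 2) from a real failure. This mirrors the
    // stale-config and etcd-certs probes in the log; it is not minikube's kubeadm.go.
    func filesPresent(paths ...string) (bool, error) {
        cmd := exec.Command("sudo", append([]string{"ls", "-la"}, paths...)...)
        err := cmd.Run()
        if err == nil {
            return true, nil
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
            return false, nil // ls: cannot access ...: No such file or directory
        }
        return false, err
    }

    func main() {
        ok, err := filesPresent(
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
        )
        fmt.Println("existing config found:", ok, "err:", err)
    }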
	I0830 22:38:23.508201 1000447 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 22:38:23.616830 1000447 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:38:23.616950 1000447 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:38:23.863649 1000447 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:38:23.978249 1000447 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:38:23.978421 1000447 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0830 22:38:24.035678 1000447 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
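The kubeadm invocation above is assembled from the rendered /var/tmp/minikube/kubeadm.yaml plus a fixed list of preflight checks to ignore, since minikube has already created the directories and static-pod manifest paths that kubeadm would otherwise refuse to touch. A sketch of how such a command line can be put together (illustrative only, not minikube's kubeadm.go):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Preflight checks minikube tolerates failing, copied from the
        // --ignore-preflight-errors flag in the log above.
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "DirAvailable--var-lib-minikube-etcd",
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }
        cmd := fmt.Sprintf(
            "sudo env PATH=\"/var/lib/minikube/binaries/v1.28.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s",
            strings.Join(ignored, ","),
        )
        fmt.Println(cmd)
    }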
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:19:37 UTC, ends at Wed 2023-08-30 22:38:25 UTC. --
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.880106691Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434037809415607,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:20:29.824023793Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-hlwf8,Uid:cdc95a13-1a94-4113-9ec0-569de1c5f49b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:16934340375059024
29,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:20:29.824039206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14eb63da8e8f9c6492592b231626e4ec1d180f9793a2bb54ae885ce9b27f0acb,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-nfbkd,Uid:450f12e3-6554-41c5-9d41-bee735b322b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434034903911761,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-nfbkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 450f12e3-6554-41c5-9d41-bee735b322b3,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:20:29.8
24025860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&PodSandboxMetadata{Name:kube-proxy-5fjvd,Uid:e0c2f2a2-2a89-4f00-8e87-76103160ab55,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434030182517117,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2a89-4f00-8e87-76103160ab55,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-08-30T22:20:29.824033924Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c4465b2a-7390-417f-b9ba-f39062e6d685,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434030159063567,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2023-08-30T22:20:29.824012538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-698195,Uid:227ec40ce81927044185b55d500f6322,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434023384077247,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce81927044185b55d500f6322,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 227ec40ce81927044185b55d500f6322,kubernetes.io/config.seen: 2023-08-30T22:20:22.830064878Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-698195,Uid:49f495b52540766d40f90f3b9a653d92,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434023370986828,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 49f495b52540766d40f90f3b9a653d92,kubernetes.io/config.seen: 2023-08-30T22:20:22.830063639Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-698195,Uid:f53c3a1f3d07438134fce272398b68a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434023344895331,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f53c3a1f3d07438134fce272398b68a4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.28:8443,kubernetes.io/config.hash: f53c3a1f3d07438134fce272398b68a4,kubernetes.io/config.seen: 2023-08-30T22:20:22.830062067Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-698195,Uid:efd97f2bb4227265e0549f42642e2bed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1693434023341263773,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.28:2379,kubernetes.io/config.hash: efd97f2bb4227265e0549f42642e2bed,kube
rnetes.io/config.seen: 2023-08-30T22:20:22.830055118Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=c5bd2080-b02a-4875-bfde-678dcb090eb4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.880961888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=55addac5-9240-4c04-b16d-119058f5a17a name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.881044172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=55addac5-9240-4c04-b16d-119058f5a17a name=/runtime.v1.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.881215444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce81927044
185b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.
kubernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=55addac5-9240-4c04-b16d-119058f5a17a name=/runtime.v1.RuntimeService/ListContainers
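The CRI-O entries above are its per-request debug logging on the /runtime.v1.RuntimeService gRPC surface; the log collector issues the same ListContainers call a kubelet would. A small client sketch against the CRI socket, using the same running-containers filter shown in the request (socket path assumed to be CRI-O's default):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // CRI-O's default socket; adjust if the runtime endpoint differs.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        // Same filter as the logged request: running containers only.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{
                State: &runtimeapi.ContainerStateValue{State: runtimeapi.ContainerState_CONTAINER_RUNNING},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Metadata.Name, c.Id)
        }
    }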
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.889615922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eabb423a-dd97-4aaf-9571-e80b464e65e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.889731937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eabb423a-dd97-4aaf-9571-e80b464e65e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.890017542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eabb423a-dd97-4aaf-9571-e80b464e65e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.927188957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8bd96639-39a2-4e83-b231-a155330a2531 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.927276982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8bd96639-39a2-4e83-b231-a155330a2531 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.927474938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8bd96639-39a2-4e83-b231-a155330a2531 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.960798880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e456f068-4aa1-4358-839d-b9af00817b6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.960945052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e456f068-4aa1-4358-839d-b9af00817b6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:24 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.961202586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e456f068-4aa1-4358-839d-b9af00817b6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:24.999949235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fc6f35c0-b020-4d09-9955-0d2b0fe9ff60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.000066950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fc6f35c0-b020-4d09-9955-0d2b0fe9ff60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.000306553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fc6f35c0-b020-4d09-9955-0d2b0fe9ff60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.032818256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5c55fa3-5c74-4aa3-bb80-8a591ed70a1f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.033034593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5c55fa3-5c74-4aa3-bb80-8a591ed70a1f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.033223342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5c55fa3-5c74-4aa3-bb80-8a591ed70a1f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.068041649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ba767dde-4c9e-46fa-86e0-3b7d52e98c94 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.068137935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ba767dde-4c9e-46fa-86e0-3b7d52e98c94 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.068392493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ba767dde-4c9e-46fa-86e0-3b7d52e98c94 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.098719386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e6057316-a9f7-4b09-ae3c-5420563a584c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.098810712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e6057316-a9f7-4b09-ae3c-5420563a584c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:38:25 no-preload-698195 crio[727]: time="2023-08-30 22:38:25.099126416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1693434062109326453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-7390-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bd91d1795dd90db68a533df9e9cbfa187dc0fd62ed528757aa149941d4ac9f,PodSandboxId:20b13e9db98e1ac521f705ccf2e8dccc4c931fdfb1191581d92a2f981768675d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1693434039795269782,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6f48515-4a8e-4f84-8760-4f3b9b12b4d5,},Annotations:map[string]string{io.kubernetes.container.hash: ec032299,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615,PodSandboxId:97f611c774cf75beb65da3ccb117dd498728cf290aeef42da70efdbdb3f7dac9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1693434038175906784,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlwf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdc95a13-1a94-4113-9ec0-569de1c5f49b,},Annotations:map[string]string{io.kubernetes.container.hash: 49c1a5ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3,PodSandboxId:67a8ad99cda129d8597c27eb24dff32ce386a813af7bee3138170c1867aad038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1693434031079918635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fjvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c2f2a2-2
a89-4f00-8e87-76103160ab55,},Annotations:map[string]string{io.kubernetes.container.hash: dd821f17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6,PodSandboxId:8be5ef26dacaea4393cb641c8079d908bd4de283e875b0cbb316c96dfc153215,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1693434030869725597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4465b2a-739
0-417f-b9ba-f39062e6d685,},Annotations:map[string]string{io.kubernetes.container.hash: a96dcf4c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6,PodSandboxId:578a57c0880beda9785b8b392affaeddf475d83c4201aa7492e18b190c6a9cec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1693434024514689905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227ec40ce8192704418
5b55d500f6322,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2,PodSandboxId:e0d15cc034086055549fefe6ad4c66e4c5d25d21d84f0e85ea7d8c69a4fbdefb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1693434024238808165,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efd97f2bb4227265e0549f42642e2bed,},Annotations:map[string]string{io.ku
bernetes.container.hash: d0bb62c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512,PodSandboxId:2d110240fe0d0e753c69bed3969df0528403b2689c4c206ce8e8da62cf1579aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1693434024117767939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49f495b52540766d40f90f3b9a653d92,},Annotation
s:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373,PodSandboxId:a1ae5ec95669ac82ccde5292bbc6fcefd2eba272ec6e741898741c86da192110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1693434023799319526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-698195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f53c3a1f3d07438134fce272398b68a4,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 55c8e156,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e6057316-a9f7-4b09-ae3c-5420563a584c name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	a4ec3add6f727       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       2                   8be5ef26dacae
	b0bd91d1795dd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago      Running             busybox                   1                   20b13e9db98e1
	61c09841e92e9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      17 minutes ago      Running             coredns                   1                   97f611c774cf7
	2fe23692aaba2       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      17 minutes ago      Running             kube-proxy                1                   67a8ad99cda12
	c00d7aca5019d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       1                   8be5ef26dacae
	94b2663b3d51d       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      18 minutes ago      Running             kube-scheduler            1                   578a57c0880be
	c6594d2e258e6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      18 minutes ago      Running             etcd                      1                   e0d15cc034086
	5f90117987e5b       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      18 minutes ago      Running             kube-controller-manager   1                   2d110240fe0d0
	2aff15ad720bf       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      18 minutes ago      Running             kube-apiserver            1                   a1ae5ec95669a
	
	* 
	* ==> coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41758 - 13562 "HINFO IN 1972653659024392533.621617805422138747. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011709872s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-698195
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-698195
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=no-preload-698195
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_10_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:10:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-698195
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:38:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:36:15 +0000   Wed, 30 Aug 2023 22:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:36:15 +0000   Wed, 30 Aug 2023 22:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:36:15 +0000   Wed, 30 Aug 2023 22:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:36:15 +0000   Wed, 30 Aug 2023 22:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.28
	  Hostname:    no-preload-698195
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 182cdf5ac5c54a6098509e831cd9b243
	  System UUID:                182cdf5a-c5c5-4a60-9850-9e831cd9b243
	  Boot ID:                    8c07ffbf-69f6-418f-9bc0-2a9d95262b85
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5dd5756b68-hlwf8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-698195                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-698195             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-698195    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-5fjvd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-698195             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-nfbkd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-698195 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-698195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-698195 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-698195 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-698195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-698195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-698195 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-698195 event: Registered Node no-preload-698195 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-698195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-698195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-698195 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node no-preload-698195 event: Registered Node no-preload-698195 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug30 22:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074697] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.774743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472144] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154231] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.565919] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.313902] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.097291] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.142301] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.124993] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.276448] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[Aug30 22:20] systemd-fstab-generator[1233]: Ignoring "noauto" for root device
	[ +15.057739] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] <==
	* {"level":"info","ts":"2023-08-30T22:20:26.1479Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-30T22:20:26.147954Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.28:2380"}
	{"level":"info","ts":"2023-08-30T22:20:26.14796Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.28:2380"}
	{"level":"info","ts":"2023-08-30T22:20:27.379084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-30T22:20:27.379134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-30T22:20:27.379173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 received MsgPreVoteResp from dd3f57cb1d137e03 at term 2"}
	{"level":"info","ts":"2023-08-30T22:20:27.37919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 became candidate at term 3"}
	{"level":"info","ts":"2023-08-30T22:20:27.379196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 received MsgVoteResp from dd3f57cb1d137e03 at term 3"}
	{"level":"info","ts":"2023-08-30T22:20:27.379204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd3f57cb1d137e03 became leader at term 3"}
	{"level":"info","ts":"2023-08-30T22:20:27.379211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dd3f57cb1d137e03 elected leader dd3f57cb1d137e03 at term 3"}
	{"level":"info","ts":"2023-08-30T22:20:27.381Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dd3f57cb1d137e03","local-member-attributes":"{Name:no-preload-698195 ClientURLs:[https://192.168.72.28:2379]}","request-path":"/0/members/dd3f57cb1d137e03/attributes","cluster-id":"2d3b68a7afbccf5b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T22:20:27.381187Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:20:27.381145Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:20:27.382433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.28:2379"}
	{"level":"info","ts":"2023-08-30T22:20:27.382628Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T22:20:27.382667Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-30T22:20:27.38354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T22:30:27.420635Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":864}
	{"level":"info","ts":"2023-08-30T22:30:27.424001Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":864,"took":"2.768736ms","hash":3181232975}
	{"level":"info","ts":"2023-08-30T22:30:27.424164Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3181232975,"revision":864,"compact-revision":-1}
	{"level":"info","ts":"2023-08-30T22:35:27.430309Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1106}
	{"level":"info","ts":"2023-08-30T22:35:27.43251Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1106,"took":"1.706502ms","hash":2670289344}
	{"level":"info","ts":"2023-08-30T22:35:27.432609Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2670289344,"revision":1106,"compact-revision":864}
	{"level":"info","ts":"2023-08-30T22:38:23.836437Z","caller":"traceutil/trace.go:171","msg":"trace[1823451562] transaction","detail":"{read_only:false; response_revision:1494; number_of_response:1; }","duration":"340.756888ms","start":"2023-08-30T22:38:23.495607Z","end":"2023-08-30T22:38:23.836364Z","steps":["trace[1823451562] 'process raft request'  (duration: 340.446722ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T22:38:23.838322Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-08-30T22:38:23.49559Z","time spent":"341.243321ms","remote":"127.0.0.1:43144","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1492 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> kernel <==
	*  22:38:25 up 18 min,  0 users,  load average: 0.43, 0.20, 0.12
	Linux no-preload-698195 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] <==
	* I0830 22:35:28.868782       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.79.55:443: connect: connection refused
	I0830 22:35:28.869130       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:35:29.010354       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:35:29.010489       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:35:29.010983       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.79.55:443: connect: connection refused
	I0830 22:35:29.010999       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:35:30.011714       1 handler_proxy.go:93] no RequestInfo found in the context
	W0830 22:35:30.011823       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:35:30.011955       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0830 22:35:30.011967       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0830 22:35:30.012048       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:35:30.013354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:36:28.869097       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.79.55:443: connect: connection refused
	I0830 22:36:28.869242       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0830 22:36:30.012571       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:36:30.012784       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0830 22:36:30.012816       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0830 22:36:30.013578       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:36:30.013666       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:36:30.014740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:37:28.868167       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.79.55:443: connect: connection refused
	I0830 22:37:28.868349       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] <==
	* I0830 22:32:42.327240       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:33:11.800071       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:33:12.335625       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:33:41.806072       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:33:42.345994       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:34:11.812138       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:34:12.354487       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:34:41.821381       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:34:42.363337       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:35:11.826533       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:35:12.372771       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:35:41.832128       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:35:42.381038       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:36:11.838350       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:36:12.389201       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:36:41.844778       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:36:42.399441       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0830 22:37:01.902584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="285.752µs"
	E0830 22:37:11.850434       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:37:12.410031       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0830 22:37:16.903259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="147.027µs"
	E0830 22:37:41.856206       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:37:42.419440       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0830 22:38:11.861436       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0830 22:38:12.430804       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] <==
	* I0830 22:20:31.246950       1 server_others.go:69] "Using iptables proxy"
	I0830 22:20:31.257680       1 node.go:141] Successfully retrieved node IP: 192.168.72.28
	I0830 22:20:31.297066       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0830 22:20:31.297118       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0830 22:20:31.300119       1 server_others.go:152] "Using iptables Proxier"
	I0830 22:20:31.300174       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 22:20:31.300460       1 server.go:846] "Version info" version="v1.28.1"
	I0830 22:20:31.300498       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:20:31.301366       1 config.go:188] "Starting service config controller"
	I0830 22:20:31.301407       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 22:20:31.301430       1 config.go:97] "Starting endpoint slice config controller"
	I0830 22:20:31.301433       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 22:20:31.301967       1 config.go:315] "Starting node config controller"
	I0830 22:20:31.302001       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 22:20:31.401958       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 22:20:31.402033       1 shared_informer.go:318] Caches are synced for node config
	I0830 22:20:31.402056       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] <==
	* I0830 22:20:26.411312       1 serving.go:348] Generated self-signed cert in-memory
	W0830 22:20:28.972992       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0830 22:20:28.973159       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 22:20:28.973197       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0830 22:20:28.973222       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 22:20:29.012813       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0830 22:20:29.012979       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:20:29.015367       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0830 22:20:29.022455       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0830 22:20:29.022508       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 22:20:29.022536       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0830 22:20:29.122631       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:19:37 UTC, ends at Wed 2023-08-30 22:38:25 UTC. --
	Aug 30 22:36:18 no-preload-698195 kubelet[1239]: E0830 22:36:18.880258    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:36:22 no-preload-698195 kubelet[1239]: E0830 22:36:22.906908    1239 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:36:22 no-preload-698195 kubelet[1239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:36:22 no-preload-698195 kubelet[1239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:36:22 no-preload-698195 kubelet[1239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:36:33 no-preload-698195 kubelet[1239]: E0830 22:36:33.880272    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:36:47 no-preload-698195 kubelet[1239]: E0830 22:36:47.894419    1239 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 30 22:36:47 no-preload-698195 kubelet[1239]: E0830 22:36:47.894497    1239 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 30 22:36:47 no-preload-698195 kubelet[1239]: E0830 22:36:47.894735    1239 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9jbm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-nfbkd_kube-system(450f12e3-6554-41c5-9d41-bee735b322b3): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:36:47 no-preload-698195 kubelet[1239]: E0830 22:36:47.894782    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:37:01 no-preload-698195 kubelet[1239]: E0830 22:37:01.879974    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:37:16 no-preload-698195 kubelet[1239]: E0830 22:37:16.880646    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:37:22 no-preload-698195 kubelet[1239]: E0830 22:37:22.906460    1239 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:37:22 no-preload-698195 kubelet[1239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:37:22 no-preload-698195 kubelet[1239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:37:22 no-preload-698195 kubelet[1239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:37:28 no-preload-698195 kubelet[1239]: E0830 22:37:28.882557    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:37:40 no-preload-698195 kubelet[1239]: E0830 22:37:40.880919    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:37:55 no-preload-698195 kubelet[1239]: E0830 22:37:55.880063    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:38:09 no-preload-698195 kubelet[1239]: E0830 22:38:09.879747    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	Aug 30 22:38:22 no-preload-698195 kubelet[1239]: E0830 22:38:22.909229    1239 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 30 22:38:22 no-preload-698195 kubelet[1239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 30 22:38:22 no-preload-698195 kubelet[1239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 30 22:38:22 no-preload-698195 kubelet[1239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 30 22:38:23 no-preload-698195 kubelet[1239]: E0830 22:38:23.881070    1239 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nfbkd" podUID="450f12e3-6554-41c5-9d41-bee735b322b3"
	
	* 
	* ==> storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] <==
	* I0830 22:21:02.231656       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 22:21:02.242487       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 22:21:02.243054       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 22:21:19.652627       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 22:21:19.652781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-698195_2bcc01c2-3faf-4b4a-b8fc-398575ecdd81!
	I0830 22:21:19.654372       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"315c45da-d624-46fd-99d0-dac8a2bd8ebf", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-698195_2bcc01c2-3faf-4b4a-b8fc-398575ecdd81 became leader
	I0830 22:21:19.753178       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-698195_2bcc01c2-3faf-4b4a-b8fc-398575ecdd81!
	
	* 
	* ==> storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] <==
	* I0830 22:20:31.096661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0830 22:21:01.100252       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
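The repeated apiserver and controller-manager entries in the dump above all trace back to the same cause: the v1beta1.metrics.k8s.io APIService is backed by the kube-system/metrics-server service at 10.101.79.55:443, and its only pod never starts (see the kubelet ImagePullBackOff entries for fake.domain/registry.k8s.io/echoserver:1.4), so aggregated discovery keeps failing with connection refused / 503. A minimal sketch of how one could confirm the APIService state against the same kubeconfig context (illustrative only, not part of the test; the jsonpath expression is an assumption):

// Illustrative sketch: print whether the metrics.k8s.io APIService reports Available,
// and the reason it gives, for the no-preload-698195 context used in this test.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "no-preload-698195",
		"get", "apiservice", "v1beta1.metrics.k8s.io",
		"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}: {.status.conditions[?(@.type=="Available")].message}`,
	).CombinedOutput()
	if err != nil {
		fmt.Printf("lookup failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("Available=%s\n", out)
}

With the metrics-server pod stuck in ImagePullBackOff this would be expected to print Available=False with a failing-or-missing-response message, matching the 503s logged by the apiserver.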
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-698195 -n no-preload-698195
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-698195 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nfbkd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-698195 describe pod metrics-server-57f55c9bc5-nfbkd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-698195 describe pod metrics-server-57f55c9bc5-nfbkd: exit status 1 (68.603509ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nfbkd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-698195 describe pod metrics-server-57f55c9bc5-nfbkd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (267.85s)
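For context on this failure: AddonExistsAfterStop waits for pods labelled k8s-app=kubernetes-dashboard (the same wait is logged for the old-k8s-version profile below with a 9m0s budget), and the post-mortem above lists only metrics-server-57f55c9bc5-nfbkd as non-running while no dashboard pod appears at all, so the wait ends with context deadline exceeded. A minimal standalone sketch of that kind of label-selector wait (illustrative only, not minikube's test helper; the context name, namespace and label come from the report, while the 10s polling interval and direct use of kubectl are assumptions):

// Illustrative sketch: poll for a Running pod matching a label, giving up at a deadline.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", "no-preload-698195",
			"get", "pods", "-n", "kubernetes-dashboard",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", "jsonpath={.items[*].status.phase}",
		).Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("dashboard pod is running")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard:", ctx.Err())
			return
		case <-time.After(10 * time.Second):
		}
	}
}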

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-250163 -n old-k8s-version-250163
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-08-30 22:37:45.074160661 +0000 UTC m=+5307.266910584
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-250163 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-250163 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.373µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-250163 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
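The assertion at start_stop_delete_test.go:297 appears to be a contains check on the deployment description: with the describe above failing on context deadline exceeded, the "Addon deployment info" is empty and cannot contain " registry.k8s.io/echoserver:1.4". A standalone sketch of re-running that check by hand (illustrative only; the context, namespace and deployment name are taken from the command above):

// Illustrative sketch: re-run the describe that timed out and look for the custom image.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "old-k8s-version-250163",
		"describe", "deploy/dashboard-metrics-scraper",
		"-n", "kubernetes-dashboard",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	if strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
		fmt.Println("deployment references the expected custom image")
	} else {
		fmt.Println("deployment does not reference registry.k8s.io/echoserver:1.4")
	}
}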
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-250163 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-250163 logs -n 25: (1.338867845s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-519738 -- sudo                         | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-519738                                 | cert-options-519738          | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:08 UTC |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:08 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-184733                              | stopped-upgrade-184733       | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:09 UTC |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:09 UTC | 30 Aug 23 22:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-693390                              | cert-expiration-693390       | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	| delete  | -p                                                     | disable-driver-mounts-883991 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:10 UTC |
	|         | disable-driver-mounts-883991                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:10 UTC | 30 Aug 23 22:12 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-698195             | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-208903            | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC | 30 Aug 23 22:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-791007  | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC | 30 Aug 23 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:12 UTC |                     |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-698195                  | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-208903                 | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-698195                                   | no-preload-698195            | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC | 30 Aug 23 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-208903                                  | embed-certs-208903           | jenkins | v1.31.2 | 30 Aug 23 22:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-250163        | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC | 30 Aug 23 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-791007       | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-791007 | jenkins | v1.31.2 | 30 Aug 23 22:15 UTC | 30 Aug 23 22:24 UTC |
	|         | default-k8s-diff-port-791007                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-250163             | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-250163                              | old-k8s-version-250163       | jenkins | v1.31.2 | 30 Aug 23 22:16 UTC | 30 Aug 23 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:16:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:16:59.758341  995603 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:16:59.758470  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758479  995603 out.go:309] Setting ErrFile to fd 2...
	I0830 22:16:59.758484  995603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:16:59.758692  995603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:16:59.759241  995603 out.go:303] Setting JSON to false
	I0830 22:16:59.760232  995603 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":14367,"bootTime":1693419453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:16:59.760291  995603 start.go:138] virtualization: kvm guest
	I0830 22:16:59.762744  995603 out.go:177] * [old-k8s-version-250163] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:16:59.764395  995603 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:16:59.765863  995603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:16:59.764404  995603 notify.go:220] Checking for updates...
	I0830 22:16:59.767579  995603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:16:59.769244  995603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:16:59.771001  995603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:16:59.772625  995603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:16:59.774574  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:16:59.774929  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.775032  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.790271  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0830 22:16:59.790677  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.791257  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.791283  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.791645  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.791879  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.793885  995603 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:16:59.795414  995603 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:16:59.795716  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:16:59.795752  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:16:59.810316  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0830 22:16:59.810694  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:16:59.811176  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:16:59.811201  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:16:59.811560  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:16:59.811808  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:16:59.845962  995603 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 22:16:59.847399  995603 start.go:298] selected driver: kvm2
	I0830 22:16:59.847410  995603 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.847546  995603 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:16:59.848301  995603 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.848376  995603 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 22:16:59.862654  995603 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 22:16:59.863040  995603 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:16:59.863080  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:16:59.863094  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:16:59.863109  995603 start_flags.go:319] config:
	{Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:16:59.863341  995603 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:16:59.865500  995603 out.go:177] * Starting control plane node old-k8s-version-250163 in cluster old-k8s-version-250163
	I0830 22:17:00.916070  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:16:59.866763  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:16:59.866836  995603 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0830 22:16:59.866852  995603 cache.go:57] Caching tarball of preloaded images
	I0830 22:16:59.866935  995603 preload.go:174] Found /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0830 22:16:59.866946  995603 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0830 22:16:59.867091  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:16:59.867314  995603 start.go:365] acquiring machines lock for old-k8s-version-250163: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:17:06.996025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:10.068036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:16.148043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:19.220024  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:25.300036  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:28.372088  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:34.452043  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:37.524037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:43.604037  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:46.676107  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:52.756100  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:17:55.828195  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:01.908025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:04.980079  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:11.060035  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:14.132025  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:20.212050  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:23.283995  994624 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.28:22: connect: no route to host
	I0830 22:18:26.288205  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 4m29.4670209s
	I0830 22:18:26.288261  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:26.288276  994705 fix.go:54] fixHost starting: 
	I0830 22:18:26.288621  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:26.288656  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:26.304048  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0830 22:18:26.304613  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:26.305138  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:18:26.305164  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:26.305518  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:26.305719  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:26.305843  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:18:26.307597  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Stopped err=<nil>
	I0830 22:18:26.307639  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	W0830 22:18:26.307827  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:26.309985  994705 out.go:177] * Restarting existing kvm2 VM for "embed-certs-208903" ...
	I0830 22:18:26.311551  994705 main.go:141] libmachine: (embed-certs-208903) Calling .Start
	I0830 22:18:26.311750  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring networks are active...
	I0830 22:18:26.312528  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network default is active
	I0830 22:18:26.312814  994705 main.go:141] libmachine: (embed-certs-208903) Ensuring network mk-embed-certs-208903 is active
	I0830 22:18:26.313153  994705 main.go:141] libmachine: (embed-certs-208903) Getting domain xml...
	I0830 22:18:26.313857  994705 main.go:141] libmachine: (embed-certs-208903) Creating domain...
	I0830 22:18:26.285881  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:26.285939  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:18:26.288013  994624 machine.go:91] provisioned docker machine in 4m37.410947228s
	I0830 22:18:26.288063  994624 fix.go:56] fixHost completed within 4m37.432260867s
	I0830 22:18:26.288085  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 4m37.432330775s
	W0830 22:18:26.288107  994624 start.go:672] error starting host: provision: host is not running
	W0830 22:18:26.288219  994624 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0830 22:18:26.288225  994624 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:27.529120  994705 main.go:141] libmachine: (embed-certs-208903) Waiting to get IP...
	I0830 22:18:27.530028  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.530390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.530515  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.530404  996319 retry.go:31] will retry after 311.351139ms: waiting for machine to come up
	I0830 22:18:27.843013  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:27.843398  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:27.843427  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:27.843337  996319 retry.go:31] will retry after 367.953943ms: waiting for machine to come up
	I0830 22:18:28.213214  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.213785  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.213820  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.213722  996319 retry.go:31] will retry after 424.275825ms: waiting for machine to come up
	I0830 22:18:28.639216  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:28.639670  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:28.639707  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:28.639609  996319 retry.go:31] will retry after 502.321201ms: waiting for machine to come up
	I0830 22:18:29.143240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.143823  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.143850  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.143790  996319 retry.go:31] will retry after 680.495047ms: waiting for machine to come up
	I0830 22:18:29.825462  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:29.825879  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:29.825904  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:29.825836  996319 retry.go:31] will retry after 756.63617ms: waiting for machine to come up
	I0830 22:18:30.583723  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:30.584179  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:30.584212  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:30.584118  996319 retry.go:31] will retry after 851.722792ms: waiting for machine to come up
	I0830 22:18:31.437603  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:31.438031  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:31.438063  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:31.437986  996319 retry.go:31] will retry after 1.214893807s: waiting for machine to come up
	I0830 22:18:31.289961  994624 start.go:365] acquiring machines lock for no-preload-698195: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:32.654351  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:32.654803  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:32.654829  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:32.654756  996319 retry.go:31] will retry after 1.574180335s: waiting for machine to come up
	I0830 22:18:34.231491  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:34.231911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:34.231944  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:34.231826  996319 retry.go:31] will retry after 1.99107048s: waiting for machine to come up
	I0830 22:18:36.225911  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:36.226336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:36.226363  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:36.226283  996319 retry.go:31] will retry after 1.816508761s: waiting for machine to come up
	I0830 22:18:38.044672  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:38.045061  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:38.045094  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:38.045021  996319 retry.go:31] will retry after 2.343148299s: waiting for machine to come up
	I0830 22:18:40.389346  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:40.389753  994705 main.go:141] libmachine: (embed-certs-208903) DBG | unable to find current IP address of domain embed-certs-208903 in network mk-embed-certs-208903
	I0830 22:18:40.389778  994705 main.go:141] libmachine: (embed-certs-208903) DBG | I0830 22:18:40.389700  996319 retry.go:31] will retry after 3.682098761s: waiting for machine to come up
	I0830 22:18:45.025750  995192 start.go:369] acquired machines lock for "default-k8s-diff-port-791007" in 3m32.939054887s
	I0830 22:18:45.025823  995192 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:18:45.025847  995192 fix.go:54] fixHost starting: 
	I0830 22:18:45.026291  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:18:45.026333  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:18:45.041161  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0830 22:18:45.041657  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:18:45.042176  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:18:45.042208  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:18:45.042544  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:18:45.042748  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:18:45.042910  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:18:45.044428  995192 fix.go:102] recreateIfNeeded on default-k8s-diff-port-791007: state=Stopped err=<nil>
	I0830 22:18:45.044454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	W0830 22:18:45.044615  995192 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:18:45.046538  995192 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-791007" ...
	I0830 22:18:44.074916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075386  994705 main.go:141] libmachine: (embed-certs-208903) Found IP for machine: 192.168.50.159
	I0830 22:18:44.075411  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has current primary IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.075418  994705 main.go:141] libmachine: (embed-certs-208903) Reserving static IP address...
	I0830 22:18:44.075899  994705 main.go:141] libmachine: (embed-certs-208903) Reserved static IP address: 192.168.50.159
	I0830 22:18:44.075928  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.075939  994705 main.go:141] libmachine: (embed-certs-208903) Waiting for SSH to be available...
	I0830 22:18:44.075959  994705 main.go:141] libmachine: (embed-certs-208903) DBG | skip adding static IP to network mk-embed-certs-208903 - found existing host DHCP lease matching {name: "embed-certs-208903", mac: "52:54:00:07:50:90", ip: "192.168.50.159"}
	I0830 22:18:44.075968  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Getting to WaitForSSH function...
	I0830 22:18:44.078068  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078390  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.078436  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.078514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH client type: external
	I0830 22:18:44.078533  994705 main.go:141] libmachine: (embed-certs-208903) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa (-rw-------)
	I0830 22:18:44.078569  994705 main.go:141] libmachine: (embed-certs-208903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:18:44.078590  994705 main.go:141] libmachine: (embed-certs-208903) DBG | About to run SSH command:
	I0830 22:18:44.078622  994705 main.go:141] libmachine: (embed-certs-208903) DBG | exit 0
	I0830 22:18:44.167514  994705 main.go:141] libmachine: (embed-certs-208903) DBG | SSH cmd err, output: <nil>: 
	I0830 22:18:44.167898  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetConfigRaw
	I0830 22:18:44.168594  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.170974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171336  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.171370  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.171696  994705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/embed-certs-208903/config.json ...
	I0830 22:18:44.171967  994705 machine.go:88] provisioning docker machine ...
	I0830 22:18:44.171989  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:18:44.172184  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172371  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:18:44.172397  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.172563  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.174522  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174861  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.174894  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.174988  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.175159  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175286  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.175413  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.175627  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.176111  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.176132  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:18:44.309192  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:18:44.309225  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.311931  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312327  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.312362  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.312512  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.312727  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.312919  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.313048  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.313215  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.313623  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.313638  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:18:44.440529  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:18:44.440594  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:18:44.440641  994705 buildroot.go:174] setting up certificates
	I0830 22:18:44.440653  994705 provision.go:83] configureAuth start
	I0830 22:18:44.440663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:18:44.440943  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:18:44.443289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443663  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.443705  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.443805  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.445987  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446297  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.446328  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.446462  994705 provision.go:138] copyHostCerts
	I0830 22:18:44.446524  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:18:44.446550  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:18:44.446638  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:18:44.446750  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:18:44.446763  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:18:44.446800  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:18:44.446907  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:18:44.446919  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:18:44.446955  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:18:44.447036  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:18:44.664313  994705 provision.go:172] copyRemoteCerts
	I0830 22:18:44.664387  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:18:44.664434  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.666819  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667160  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.667192  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.667338  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.667565  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.667687  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.667839  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:18:44.756922  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:18:44.780430  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:18:44.803396  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:18:44.825975  994705 provision.go:86] duration metric: configureAuth took 385.307932ms
	I0830 22:18:44.826006  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:18:44.826230  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:18:44.826334  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:18:44.828862  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829199  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:18:44.829240  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:18:44.829383  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:18:44.829606  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829770  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:18:44.829907  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:18:44.830104  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:18:44.830593  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:18:44.830615  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:18:45.025539  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025585  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:18:45.025596  994705 machine.go:91] provisioned docker machine in 853.613637ms
	I0830 22:18:45.025627  994705 fix.go:56] fixHost completed within 18.737351046s
	I0830 22:18:45.025637  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 18.737393499s
	W0830 22:18:45.025662  994705 start.go:672] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0830 22:18:45.025746  994705 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:18:45.025760  994705 start.go:687] Will try again in 5 seconds ...
	I0830 22:18:45.047821  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Start
	I0830 22:18:45.047982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring networks are active...
	I0830 22:18:45.048684  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network default is active
	I0830 22:18:45.049040  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Ensuring network mk-default-k8s-diff-port-791007 is active
	I0830 22:18:45.049401  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Getting domain xml...
	I0830 22:18:45.050009  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Creating domain...
	I0830 22:18:46.288943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting to get IP...
	I0830 22:18:46.289982  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290359  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.290494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.290388  996430 retry.go:31] will retry after 228.105709ms: waiting for machine to come up
	I0830 22:18:46.519862  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520369  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.520389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.520342  996430 retry.go:31] will retry after 343.008473ms: waiting for machine to come up
	I0830 22:18:46.865023  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865426  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:46.865468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:46.865385  996430 retry.go:31] will retry after 467.017605ms: waiting for machine to come up
	I0830 22:18:50.028247  994705 start.go:365] acquiring machines lock for embed-certs-208903: {Name:mke6a9629606c547866f9277d26981e565442e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0830 22:18:47.334027  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334655  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.334682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.334600  996430 retry.go:31] will retry after 601.952764ms: waiting for machine to come up
	I0830 22:18:47.937980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938454  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:47.938494  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:47.938387  996430 retry.go:31] will retry after 556.18277ms: waiting for machine to come up
	I0830 22:18:48.495747  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496130  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:48.496184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:48.496101  996430 retry.go:31] will retry after 696.126701ms: waiting for machine to come up
	I0830 22:18:49.193405  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193789  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:49.193822  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:49.193752  996430 retry.go:31] will retry after 1.123021492s: waiting for machine to come up
	I0830 22:18:50.318326  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318682  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:50.318710  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:50.318637  996430 retry.go:31] will retry after 1.198520166s: waiting for machine to come up
	I0830 22:18:51.518959  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519302  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:51.519332  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:51.519244  996430 retry.go:31] will retry after 1.851352392s: waiting for machine to come up
	I0830 22:18:53.373208  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373676  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:53.373713  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:53.373594  996430 retry.go:31] will retry after 1.789163964s: waiting for machine to come up
	I0830 22:18:55.164132  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:55.164664  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:55.164587  996430 retry.go:31] will retry after 2.037803279s: waiting for machine to come up
	I0830 22:18:57.204503  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204957  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:18:57.204984  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:18:57.204919  996430 retry.go:31] will retry after 3.365492251s: waiting for machine to come up
	I0830 22:19:00.572195  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | unable to find current IP address of domain default-k8s-diff-port-791007 in network mk-default-k8s-diff-port-791007
	I0830 22:19:00.572634  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | I0830 22:19:00.572533  996430 retry.go:31] will retry after 3.57478782s: waiting for machine to come up
	I0830 22:19:05.536665  995603 start.go:369] acquired machines lock for "old-k8s-version-250163" in 2m5.669275373s
	I0830 22:19:05.536730  995603 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:05.536751  995603 fix.go:54] fixHost starting: 
	I0830 22:19:05.537197  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:05.537240  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:05.556581  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0830 22:19:05.557016  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:05.557559  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:19:05.557590  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:05.557937  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:05.558124  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:05.558290  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:19:05.559829  995603 fix.go:102] recreateIfNeeded on old-k8s-version-250163: state=Stopped err=<nil>
	I0830 22:19:05.559871  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	W0830 22:19:05.560056  995603 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:05.562726  995603 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-250163" ...
	I0830 22:19:04.151280  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.151787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Found IP for machine: 192.168.61.104
	I0830 22:19:04.151820  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserving static IP address...
	I0830 22:19:04.151839  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has current primary IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.152254  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.152286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Reserved static IP address: 192.168.61.104
	I0830 22:19:04.152306  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | skip adding static IP to network mk-default-k8s-diff-port-791007 - found existing host DHCP lease matching {name: "default-k8s-diff-port-791007", mac: "52:54:00:1e:2e:1e", ip: "192.168.61.104"}
	I0830 22:19:04.152324  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Waiting for SSH to be available...
	I0830 22:19:04.152339  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Getting to WaitForSSH function...
	I0830 22:19:04.154335  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.154701  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.154791  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH client type: external
	I0830 22:19:04.154833  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa (-rw-------)
	I0830 22:19:04.154852  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:04.154868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | About to run SSH command:
	I0830 22:19:04.154879  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | exit 0
	I0830 22:19:04.251692  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:04.252182  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetConfigRaw
	I0830 22:19:04.252842  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.255184  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255536  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.255571  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.255850  995192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/config.json ...
	I0830 22:19:04.256118  995192 machine.go:88] provisioning docker machine ...
	I0830 22:19:04.256143  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:04.256344  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256504  995192 buildroot.go:166] provisioning hostname "default-k8s-diff-port-791007"
	I0830 22:19:04.256525  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.256653  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.259010  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259366  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.259389  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.259509  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.259667  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.259943  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.260115  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.260787  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.260810  995192 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-791007 && echo "default-k8s-diff-port-791007" | sudo tee /etc/hostname
	I0830 22:19:04.403123  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-791007
	
	I0830 22:19:04.403166  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.405835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406219  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.406270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.406476  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.406704  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.406892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.407047  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.407233  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.407634  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.407658  995192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-791007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-791007/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-791007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:04.549964  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:04.550002  995192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:04.550039  995192 buildroot.go:174] setting up certificates
	I0830 22:19:04.550053  995192 provision.go:83] configureAuth start
	I0830 22:19:04.550071  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetMachineName
	I0830 22:19:04.550422  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:04.552844  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553116  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.553150  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.553313  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.555514  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.555880  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.555917  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.556036  995192 provision.go:138] copyHostCerts
	I0830 22:19:04.556100  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:04.556133  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:04.556213  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:04.556343  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:04.556354  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:04.556392  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:04.556485  995192 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:04.556496  995192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:04.556528  995192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:04.556607  995192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-791007 san=[192.168.61.104 192.168.61.104 localhost 127.0.0.1 minikube default-k8s-diff-port-791007]
	I0830 22:19:04.756354  995192 provision.go:172] copyRemoteCerts
	I0830 22:19:04.756413  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:04.756438  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.759134  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759511  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.759544  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.759739  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.759977  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.760153  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.760297  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:04.858949  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:04.882455  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0830 22:19:04.905659  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:04.929876  995192 provision.go:86] duration metric: configureAuth took 379.794026ms
	I0830 22:19:04.929905  995192 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:04.930124  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:04.930228  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:04.932799  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933159  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:04.933192  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:04.933316  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:04.933531  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933703  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:04.933835  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:04.934015  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:04.934606  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:04.934633  995192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:05.266317  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:05.266349  995192 machine.go:91] provisioned docker machine in 1.010213866s
	I0830 22:19:05.266363  995192 start.go:300] post-start starting for "default-k8s-diff-port-791007" (driver="kvm2")
	I0830 22:19:05.266378  995192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:05.266402  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.266764  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:05.266802  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.269938  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270300  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.270345  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.270472  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.270650  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.270800  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.270922  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.365334  995192 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:05.369583  995192 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:05.369608  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:05.369701  995192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:05.369790  995192 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:05.369879  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:05.377933  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:05.401027  995192 start.go:303] post-start completed in 134.648062ms
	I0830 22:19:05.401051  995192 fix.go:56] fixHost completed within 20.37520461s
	I0830 22:19:05.401079  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.404156  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404595  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.404629  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.404765  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.404960  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405138  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.405260  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.405463  995192 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:05.405917  995192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I0830 22:19:05.405930  995192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:05.536449  995192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433945.485000324
	
	I0830 22:19:05.536479  995192 fix.go:206] guest clock: 1693433945.485000324
	I0830 22:19:05.536490  995192 fix.go:219] Guest: 2023-08-30 22:19:05.485000324 +0000 UTC Remote: 2023-08-30 22:19:05.401056033 +0000 UTC m=+233.468479321 (delta=83.944291ms)
	I0830 22:19:05.536524  995192 fix.go:190] guest clock delta is within tolerance: 83.944291ms
	I0830 22:19:05.536535  995192 start.go:83] releasing machines lock for "default-k8s-diff-port-791007", held for 20.510742441s
	I0830 22:19:05.536569  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.536868  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:05.539651  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540017  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.540057  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.540196  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540737  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540911  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:19:05.540975  995192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:05.541036  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.541133  995192 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:05.541172  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:19:05.543846  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.543892  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544250  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544286  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:05.544317  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:05.544411  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544540  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:19:05.544627  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544707  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:19:05.544792  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544865  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:19:05.544926  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.544972  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:19:05.677442  995192 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:05.683243  995192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:05.832776  995192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:05.838924  995192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:05.839000  995192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:05.857231  995192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:05.857251  995192 start.go:466] detecting cgroup driver to use...
	I0830 22:19:05.857349  995192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:05.875107  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:05.888540  995192 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:05.888603  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:05.901129  995192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:05.914011  995192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:06.015763  995192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:06.144950  995192 docker.go:212] disabling docker service ...
	I0830 22:19:06.145052  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:06.159373  995192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:06.172560  995192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:06.279514  995192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:06.413719  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:06.427047  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:06.443765  995192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:06.443853  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.452621  995192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:06.452690  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.461365  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.470052  995192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:06.478685  995192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:06.487763  995192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:06.495483  995192 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:06.495551  995192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:06.508009  995192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:06.516397  995192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:06.615209  995192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:06.792388  995192 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:06.792466  995192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:06.798170  995192 start.go:534] Will wait 60s for crictl version
	I0830 22:19:06.798231  995192 ssh_runner.go:195] Run: which crictl
	I0830 22:19:06.801828  995192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:06.842351  995192 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:06.842459  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.898609  995192 ssh_runner.go:195] Run: crio --version
	I0830 22:19:06.962179  995192 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:06.963711  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetIP
	I0830 22:19:06.966803  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967189  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:19:06.967225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:19:06.967412  995192 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:06.972033  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:05.564313  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Start
	I0830 22:19:05.564511  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring networks are active...
	I0830 22:19:05.565235  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network default is active
	I0830 22:19:05.565567  995603 main.go:141] libmachine: (old-k8s-version-250163) Ensuring network mk-old-k8s-version-250163 is active
	I0830 22:19:05.565954  995603 main.go:141] libmachine: (old-k8s-version-250163) Getting domain xml...
	I0830 22:19:05.566644  995603 main.go:141] libmachine: (old-k8s-version-250163) Creating domain...
	I0830 22:19:06.869485  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting to get IP...
	I0830 22:19:06.870595  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:06.871071  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:06.871133  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:06.871046  996542 retry.go:31] will retry after 294.811471ms: waiting for machine to come up
	I0830 22:19:07.167657  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.168126  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.168172  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.168099  996542 retry.go:31] will retry after 376.474639ms: waiting for machine to come up
	I0830 22:19:07.546876  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.547389  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.547419  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.547354  996542 retry.go:31] will retry after 329.757182ms: waiting for machine to come up
	I0830 22:19:07.878995  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:07.879572  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:07.879601  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:07.879529  996542 retry.go:31] will retry after 567.335814ms: waiting for machine to come up
	I0830 22:19:08.448373  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.448996  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.449028  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.448958  996542 retry.go:31] will retry after 510.216093ms: waiting for machine to come up
	I0830 22:19:08.960855  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:08.961412  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:08.961451  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:08.961326  996542 retry.go:31] will retry after 688.575912ms: waiting for machine to come up
	I0830 22:19:09.651966  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:09.652379  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:09.652411  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:09.652346  996542 retry.go:31] will retry after 1.130912238s: waiting for machine to come up
	I0830 22:19:06.984632  995192 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:06.984698  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:07.020200  995192 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:07.020282  995192 ssh_runner.go:195] Run: which lz4
	I0830 22:19:07.024254  995192 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:19:07.028470  995192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:07.028508  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0830 22:19:08.986852  995192 crio.go:444] Took 1.962647 seconds to copy over tarball
	I0830 22:19:08.986915  995192 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 22:19:10.784839  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:10.785424  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:10.785456  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:10.785355  996542 retry.go:31] will retry after 898.98114ms: waiting for machine to come up
	I0830 22:19:11.685890  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:11.686614  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:11.686646  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:11.686558  996542 retry.go:31] will retry after 1.621086004s: waiting for machine to come up
	I0830 22:19:13.310234  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:13.310696  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:13.310721  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:13.310630  996542 retry.go:31] will retry after 1.652651656s: waiting for machine to come up
	I0830 22:19:12.113071  995192 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.126115747s)
	I0830 22:19:12.113107  995192 crio.go:451] Took 3.126230 seconds to extract the tarball
	I0830 22:19:12.113120  995192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:12.156320  995192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:12.200547  995192 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:19:12.200573  995192 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:19:12.200652  995192 ssh_runner.go:195] Run: crio config
	I0830 22:19:12.273153  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:12.273180  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:12.273205  995192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:12.273231  995192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-791007 NodeName:default-k8s-diff-port-791007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:19:12.273413  995192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-791007"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:12.273497  995192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-791007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0830 22:19:12.273573  995192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:19:12.283536  995192 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:12.283609  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:12.292260  995192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0830 22:19:12.309407  995192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:12.325757  995192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0830 22:19:12.342664  995192 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:12.346459  995192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:12.358721  995192 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007 for IP: 192.168.61.104
	I0830 22:19:12.358797  995192 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:12.359010  995192 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:12.359066  995192 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:12.359147  995192 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.key
	I0830 22:19:12.359219  995192 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key.a202b4d9
	I0830 22:19:12.359255  995192 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key
	I0830 22:19:12.359363  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:12.359390  995192 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:12.359400  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:12.359424  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:12.359449  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:12.359471  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:12.359507  995192 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:12.360328  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:12.385275  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 22:19:12.410697  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:12.434240  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:19:12.457206  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:12.484695  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:12.507670  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:12.531114  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:12.554501  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:12.579425  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:12.603211  995192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:12.628506  995192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:12.645536  995192 ssh_runner.go:195] Run: openssl version
	I0830 22:19:12.650882  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:12.660449  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665173  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.665239  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:12.670785  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:19:12.681196  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:12.690775  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695204  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.695262  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:12.700668  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:12.710205  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:12.719691  995192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724744  995192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.724803  995192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:12.730472  995192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:12.740194  995192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:12.744773  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:12.750633  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:12.756228  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:12.762258  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:12.767895  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:12.773716  995192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0830 22:19:12.779716  995192 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-791007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-791007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:12.779849  995192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:12.779895  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:12.808983  995192 cri.go:89] found id: ""
	I0830 22:19:12.809055  995192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:12.818188  995192 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:12.818208  995192 kubeadm.go:636] restartCluster start
	I0830 22:19:12.818258  995192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:12.829333  995192 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.830440  995192 kubeconfig.go:92] found "default-k8s-diff-port-791007" server: "https://192.168.61.104:8444"
	I0830 22:19:12.833172  995192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:12.841419  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.841468  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.852072  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:12.852092  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:12.852135  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:12.862195  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.362894  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.362981  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.374932  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:13.862450  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:13.862558  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:13.874629  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.363249  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.363368  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.375071  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.862656  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:14.862767  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:14.874077  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.363282  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.363389  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.374762  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:15.862279  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:15.862375  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:15.873942  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.362457  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.362554  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.373922  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:16.862336  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:16.862415  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:16.873540  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:14.964585  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:14.965020  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:14.965042  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:14.964995  996542 retry.go:31] will retry after 1.89297354s: waiting for machine to come up
	I0830 22:19:16.859309  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:16.859825  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:16.859852  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:16.859777  996542 retry.go:31] will retry after 2.908196896s: waiting for machine to come up
	I0830 22:19:17.363243  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.363347  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.378177  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:17.862706  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:17.862785  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:17.877394  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.363052  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.363183  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.377397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:18.862918  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:18.862995  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:18.878397  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.362972  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.363052  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.374591  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.863153  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:19.863233  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:19.878572  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.362613  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.362703  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.374006  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:20.862535  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:20.862634  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:20.874066  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.362612  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.362721  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.375262  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:21.863011  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:21.863113  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:21.874498  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:19.771969  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:19.772453  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | unable to find current IP address of domain old-k8s-version-250163 in network mk-old-k8s-version-250163
	I0830 22:19:19.772482  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | I0830 22:19:19.772410  996542 retry.go:31] will retry after 3.967899631s: waiting for machine to come up
	I0830 22:19:23.743741  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744344  995603 main.go:141] libmachine: (old-k8s-version-250163) Found IP for machine: 192.168.39.10
	I0830 22:19:23.744371  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserving static IP address...
	I0830 22:19:23.744387  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has current primary IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.744827  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.744860  995603 main.go:141] libmachine: (old-k8s-version-250163) Reserved static IP address: 192.168.39.10
	I0830 22:19:23.744877  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | skip adding static IP to network mk-old-k8s-version-250163 - found existing host DHCP lease matching {name: "old-k8s-version-250163", mac: "52:54:00:ba:25:c9", ip: "192.168.39.10"}
	I0830 22:19:23.744904  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Getting to WaitForSSH function...
	I0830 22:19:23.744920  995603 main.go:141] libmachine: (old-k8s-version-250163) Waiting for SSH to be available...
	I0830 22:19:23.747285  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747642  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.747676  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.747864  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH client type: external
	I0830 22:19:23.747896  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa (-rw-------)
	I0830 22:19:23.747935  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:23.747954  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | About to run SSH command:
	I0830 22:19:23.747971  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | exit 0
	I0830 22:19:23.836434  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:23.837035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetConfigRaw
	I0830 22:19:23.837845  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:23.840648  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.841088  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.841433  995603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/config.json ...
	I0830 22:19:23.841663  995603 machine.go:88] provisioning docker machine ...
	I0830 22:19:23.841688  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:23.841895  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842049  995603 buildroot.go:166] provisioning hostname "old-k8s-version-250163"
	I0830 22:19:23.842069  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:23.842291  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.844953  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845376  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.845408  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.845678  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.845885  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.846186  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.846361  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.846839  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.846861  995603 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-250163 && echo "old-k8s-version-250163" | sudo tee /etc/hostname
	I0830 22:19:23.981507  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-250163
	
	I0830 22:19:23.981556  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:23.984891  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:23.985249  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:23.985369  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:23.985604  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.985811  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:23.986000  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:23.986199  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:23.986603  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:23.986620  995603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-250163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-250163/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-250163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:24.115894  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:24.115952  995603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:24.115985  995603 buildroot.go:174] setting up certificates
	I0830 22:19:24.115996  995603 provision.go:83] configureAuth start
	I0830 22:19:24.116014  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetMachineName
	I0830 22:19:24.116342  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:24.118887  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119266  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.119312  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.119572  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.122166  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122551  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.122590  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.122700  995603 provision.go:138] copyHostCerts
	I0830 22:19:24.122769  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:24.122793  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:24.122868  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:24.122989  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:24.123004  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:24.123035  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:24.123168  995603 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:24.123184  995603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:24.123217  995603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:24.123302  995603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-250163 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube old-k8s-version-250163]
	I0830 22:19:24.303093  995603 provision.go:172] copyRemoteCerts
	I0830 22:19:24.303156  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:24.303182  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.305900  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306173  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.306199  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.306352  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.306545  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.306728  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.306873  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.393858  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:24.418791  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:19:24.441090  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:24.462926  995603 provision.go:86] duration metric: configureAuth took 346.913079ms
	I0830 22:19:24.462952  995603 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:24.463136  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:19:24.463224  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.465978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466321  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.466357  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.466559  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.466785  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.466934  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.467035  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.467173  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.467657  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.467676  995603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:25.058077  994624 start.go:369] acquired machines lock for "no-preload-698195" in 53.768050843s
	I0830 22:19:25.058128  994624 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:25.058141  994624 fix.go:54] fixHost starting: 
	I0830 22:19:25.058564  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:25.058603  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:25.076580  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45105
	I0830 22:19:25.077082  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:25.077788  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:19:25.077824  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:25.078214  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:25.078418  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:25.078695  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:19:25.080411  994624 fix.go:102] recreateIfNeeded on no-preload-698195: state=Stopped err=<nil>
	I0830 22:19:25.080447  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	W0830 22:19:25.080636  994624 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:25.082566  994624 out.go:177] * Restarting existing kvm2 VM for "no-preload-698195" ...
	I0830 22:19:24.795523  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:24.795562  995603 machine.go:91] provisioned docker machine in 953.87669ms
	I0830 22:19:24.795575  995603 start.go:300] post-start starting for "old-k8s-version-250163" (driver="kvm2")
	I0830 22:19:24.795590  995603 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:24.795616  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:24.795984  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:24.796046  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.799136  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799534  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.799561  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.799797  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.799996  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.800210  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.800396  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:24.890335  995603 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:24.894780  995603 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:24.894807  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:24.894890  995603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:24.894986  995603 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:24.895110  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:24.907259  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:24.932802  995603 start.go:303] post-start completed in 137.211475ms
	I0830 22:19:24.932829  995603 fix.go:56] fixHost completed within 19.396077949s
	I0830 22:19:24.932858  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:24.935762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936118  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:24.936160  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:24.936310  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:24.936538  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936721  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:24.936918  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:24.937109  995603 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:24.937748  995603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0830 22:19:24.937767  995603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:25.057876  995603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433965.004650095
	
	I0830 22:19:25.057911  995603 fix.go:206] guest clock: 1693433965.004650095
	I0830 22:19:25.057924  995603 fix.go:219] Guest: 2023-08-30 22:19:25.004650095 +0000 UTC Remote: 2023-08-30 22:19:24.932833395 +0000 UTC m=+145.224486267 (delta=71.8167ms)
	I0830 22:19:25.057987  995603 fix.go:190] guest clock delta is within tolerance: 71.8167ms
	I0830 22:19:25.057998  995603 start.go:83] releasing machines lock for "old-k8s-version-250163", held for 19.521294969s
	I0830 22:19:25.058036  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.058351  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:25.061325  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061749  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.061782  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.061965  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062635  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:19:25.062921  995603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:25.062977  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.063084  995603 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:25.063119  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:19:25.065978  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066217  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066375  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066428  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066620  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066668  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:25.066784  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:19:25.066806  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:25.066829  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.066953  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:19:25.067142  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067206  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:19:25.067278  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.067389  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:19:25.181017  995603 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:25.188428  995603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:25.337310  995603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:25.346144  995603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:25.346231  995603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:25.368931  995603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:25.368966  995603 start.go:466] detecting cgroup driver to use...
	I0830 22:19:25.369048  995603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:25.383524  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:25.399296  995603 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:25.399365  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:25.416387  995603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:25.430426  995603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:25.552861  995603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:25.699278  995603 docker.go:212] disabling docker service ...
	I0830 22:19:25.699350  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:25.718108  995603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:25.736420  995603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:25.871165  995603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:25.993674  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:26.009215  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:26.027014  995603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0830 22:19:26.027122  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.038902  995603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:26.038985  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.051908  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.062635  995603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:26.073049  995603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:26.086514  995603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:26.098352  995603 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:26.098405  995603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:26.117326  995603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:26.129854  995603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:26.259656  995603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:26.476938  995603 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:26.477034  995603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:26.482773  995603 start.go:534] Will wait 60s for crictl version
	I0830 22:19:26.482841  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:26.486853  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:26.525498  995603 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:26.525595  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.585226  995603 ssh_runner.go:195] Run: crio --version
	I0830 22:19:26.641386  995603 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
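	The CRI-O configuration a few lines above is done with in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup). A small Go sketch that mirrors those logged commands, assuming it runs on the guest with sudo available; it is only an illustration of the edits, not minikube's own code path.

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		cmds := [][]string{
			// Pin the pause image used by CRI-O.
			{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|`, conf},
			// Switch the cgroup driver to cgroupfs to match the kubelet configuration.
			{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
			// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
			{"sudo", "sed", "-i", `/conmon_cgroup = .*/d`, conf},
			{"sudo", "sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("%v -> err=%v output=%s\n", c, err, out)
		}
	}
	```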
	I0830 22:19:22.362364  995192 api_server.go:166] Checking apiserver status ...
	I0830 22:19:22.362448  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:22.373701  995192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:22.842449  995192 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:22.842531  995192 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:22.842551  995192 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:22.842623  995192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:22.871557  995192 cri.go:89] found id: ""
	I0830 22:19:22.871624  995192 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:22.886295  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:22.894486  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:22.894549  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902556  995192 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:22.902578  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.017775  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.631493  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.831074  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.923222  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:23.994499  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:23.994583  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.007515  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:24.519195  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.019167  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:25.519068  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.019708  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.519664  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:26.547751  995192 api_server.go:72] duration metric: took 2.553248139s to wait for apiserver process to appear ...
	I0830 22:19:26.547794  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:26.547816  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
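	The healthz wait that starts here (and whose 403/500 responses appear further down) follows a simple poll-until-healthy pattern. A minimal Go sketch of that pattern, assuming an anonymous HTTPS probe against the apiserver with certificate verification disabled; the timing values are illustrative, not minikube's.

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given URL until it returns 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The probe is anonymous and the apiserver cert is self-signed in this setup.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.104:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```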
	I0830 22:19:25.084008  994624 main.go:141] libmachine: (no-preload-698195) Calling .Start
	I0830 22:19:25.084189  994624 main.go:141] libmachine: (no-preload-698195) Ensuring networks are active...
	I0830 22:19:25.085011  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network default is active
	I0830 22:19:25.085319  994624 main.go:141] libmachine: (no-preload-698195) Ensuring network mk-no-preload-698195 is active
	I0830 22:19:25.085676  994624 main.go:141] libmachine: (no-preload-698195) Getting domain xml...
	I0830 22:19:25.086427  994624 main.go:141] libmachine: (no-preload-698195) Creating domain...
	I0830 22:19:26.443042  994624 main.go:141] libmachine: (no-preload-698195) Waiting to get IP...
	I0830 22:19:26.444179  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.444691  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.444784  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.444686  996676 retry.go:31] will retry after 208.17912ms: waiting for machine to come up
	I0830 22:19:26.654132  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.654621  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.654651  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.654581  996676 retry.go:31] will retry after 304.249592ms: waiting for machine to come up
	I0830 22:19:26.960205  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:26.960990  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:26.961014  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:26.960912  996676 retry.go:31] will retry after 342.108913ms: waiting for machine to come up
	I0830 22:19:27.304766  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.305661  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.305700  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.305602  996676 retry.go:31] will retry after 500.147687ms: waiting for machine to come up
	I0830 22:19:27.808375  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:27.808867  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:27.808884  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:27.808796  996676 retry.go:31] will retry after 562.543443ms: waiting for machine to come up
	I0830 22:19:28.373420  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:28.373912  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:28.373938  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:28.373863  996676 retry.go:31] will retry after 755.787662ms: waiting for machine to come up
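	The "will retry after ...: waiting for machine to come up" lines above are a retry loop with a roughly increasing, jittered delay while the VM waits for a DHCP lease. A small Go sketch of that loop; lookupIP is a hypothetical stand-in for however the KVM driver actually queries the libvirt lease table, and the intervals only approximate the ones in the log.

	```go
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// lookupIP is a placeholder: pretend the DHCP lease has not appeared yet.
	func lookupIP() (string, error) {
		return "", errNoIP
	}

	// waitForIP retries lookupIP with a jittered, growing delay until it succeeds or gives up.
	func waitForIP(maxAttempts int) (string, error) {
		backoff := 200 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			backoff += backoff / 2
		}
		return "", fmt.Errorf("machine did not come up after %d attempts", maxAttempts)
	}

	func main() {
		if _, err := waitForIP(10); err != nil {
			fmt.Println(err)
		}
	}
	```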
	I0830 22:19:26.642985  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetIP
	I0830 22:19:26.646304  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646712  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:19:26.646773  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:19:26.646957  995603 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:26.652439  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:26.667339  995603 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 22:19:26.667418  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:26.703670  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:26.703750  995603 ssh_runner.go:195] Run: which lz4
	I0830 22:19:26.708087  995603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0830 22:19:26.712329  995603 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 22:19:26.712362  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0830 22:19:28.602303  995603 crio.go:444] Took 1.894253 seconds to copy over tarball
	I0830 22:19:28.602408  995603 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
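	The preload step above first stats /preloaded.tar.lz4 on the guest, copies the tarball over when it is missing, then extracts it into /var with lz4-compressed tar. A rough Go sketch of that flow under the assumption it runs directly on the guest; the real code performs the copy over SSH, which is elided here.

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Existence check: a non-zero exit from stat means the tarball still has to be copied over.
		if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
			fmt.Println("preload tarball missing; the real flow copies it over SSH at this point")
			return
		}
		// Extract the preloaded container images into /var, where CRI-O keeps its image store.
		out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
		fmt.Printf("err: %v, output: %s\n", err, out)
	}
	```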
	I0830 22:19:30.838763  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.838807  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:30.838824  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:30.908950  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:30.908987  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:31.409372  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.420411  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.420480  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:31.909095  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:31.916778  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:31.916813  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:29.130983  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:29.131530  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:29.131565  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:29.131459  996676 retry.go:31] will retry after 951.657872ms: waiting for machine to come up
	I0830 22:19:30.084853  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:30.085280  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:30.085306  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:30.085247  996676 retry.go:31] will retry after 1.469099841s: waiting for machine to come up
	I0830 22:19:31.556432  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:31.556893  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:31.556918  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:31.556809  996676 retry.go:31] will retry after 1.217757948s: waiting for machine to come up
	I0830 22:19:32.775796  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:32.776120  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:32.776152  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:32.776080  996676 retry.go:31] will retry after 2.032727742s: waiting for machine to come up
	I0830 22:19:31.859924  995603 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.257478408s)
	I0830 22:19:31.859957  995603 crio.go:451] Took 3.257622 seconds to extract the tarball
	I0830 22:19:31.859970  995603 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 22:19:31.917027  995603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:31.965752  995603 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0830 22:19:31.965777  995603 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:31.965886  995603 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.965944  995603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.965980  995603 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.965879  995603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.966084  995603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.965878  995603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.965967  995603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:31.965901  995603 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968024  995603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:31.968045  995603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:31.968079  995603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:31.968186  995603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:31.968191  995603 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:31.968193  995603 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0830 22:19:31.968248  995603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:31.968766  995603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.140478  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.140975  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.157997  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0830 22:19:32.159468  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.159950  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.160033  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.161682  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.255481  995603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:32.261235  995603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0830 22:19:32.261291  995603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.261340  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.282724  995603 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0830 22:19:32.282781  995603 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.282854  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378268  995603 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0830 22:19:32.378372  995603 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0830 22:19:32.378417  995603 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0830 22:19:32.378507  995603 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.378551  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378377  995603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0830 22:19:32.378578  995603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0830 22:19:32.378591  995603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.378600  995603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.378295  995603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378632  995603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.378439  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378657  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.378624  995603 ssh_runner.go:195] Run: which crictl
	I0830 22:19:32.468864  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0830 22:19:32.468935  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0830 22:19:32.469002  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0830 22:19:32.469032  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0830 22:19:32.469123  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0830 22:19:32.469183  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0830 22:19:32.469184  995603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0830 22:19:32.563508  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0830 22:19:32.563630  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0830 22:19:32.586962  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0830 22:19:32.587044  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0830 22:19:32.587059  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0830 22:19:32.587115  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0830 22:19:32.587208  995603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.587265  995603 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0830 22:19:32.592221  995603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0830 22:19:32.592246  995603 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0830 22:19:32.592300  995603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0830 22:19:34.254194  995603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.661863162s)
	I0830 22:19:34.254235  995603 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0830 22:19:34.254281  995603 cache_images.go:92] LoadImages completed in 2.288490025s
	W0830 22:19:34.254418  995603 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
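The cache reconciliation above follows a fixed shell pattern: inspect the image in the CRI-O/podman store, remove any stale tag with crictl, then stream the cached tarball back in with podman. A sketch of that sequence for the pause image from this run (the other images were missing from the local cache, hence the warning):

    # 1. does the runtime already hold the image at the expected digest?
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1
    # 2. if not, or the digest differs, drop the stale tag ...
    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
    # 3. ... and load the tarball that minikube copied to the node
    sudo podman load -i /var/lib/minikube/images/pause_3.1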
	I0830 22:19:34.254514  995603 ssh_runner.go:195] Run: crio config
	I0830 22:19:34.338842  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.338876  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.338903  995603 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:19:34.338929  995603 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-250163 NodeName:old-k8s-version-250163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 22:19:34.339134  995603 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-250163"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-250163
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.10:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:19:34.339240  995603 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-250163 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:19:34.339313  995603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0830 22:19:34.348990  995603 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:19:34.349076  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:19:34.358084  995603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0830 22:19:34.376989  995603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:19:34.396552  995603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
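After the 10-kubeadm.conf drop-in, the kubelet unit file and kubeadm.yaml.new have been written, the usual follow-up on the node is a systemd reload plus kubelet restart, and the generated config can be sanity-checked with a kubeadm dry run. This is only a sketch; minikube drives these steps itself later in the restart path:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    # optional sanity check of the generated config; --dry-run makes no changes on the node
    sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run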
	I0830 22:19:34.416666  995603 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0830 22:19:34.421910  995603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
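The one-liner above keeps /etc/hosts idempotent: strip any existing control-plane.minikube.internal entry, append the current mapping, then install the result as root. Expanded for readability (same command, reformatted):

    # drop any stale entry, append the fresh mapping, then copy the result back as root
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.10\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts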
	I0830 22:19:34.436393  995603 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163 for IP: 192.168.39.10
	I0830 22:19:34.436490  995603 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:19:34.436717  995603 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:19:34.436774  995603 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:19:34.436867  995603 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.key
	I0830 22:19:34.436944  995603 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key.713efbbe
	I0830 22:19:34.437006  995603 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key
	I0830 22:19:34.437140  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:19:34.437187  995603 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:19:34.437203  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:19:34.437249  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:19:34.437284  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:19:34.437320  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:19:34.437388  995603 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:34.438079  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:19:34.470943  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:19:34.503477  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:19:34.533783  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:19:34.562423  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:19:34.594418  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:19:34.625417  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:19:34.657444  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:19:34.689407  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:19:34.719004  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:19:34.745856  995603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:19:32.410110  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.418241  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.418269  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:32.910053  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:32.915839  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:32.915870  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.410086  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.488115  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.488161  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:33.909647  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:33.915252  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:33.915284  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.409978  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.418957  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:19:34.418995  995192 api_server.go:103] status: https://192.168.61.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:19:34.909561  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:19:34.925400  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:19:34.938760  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:19:34.938793  995192 api_server.go:131] duration metric: took 8.390990557s to wait for apiserver health ...
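The ~8.4 s wait above is a plain poll of the apiserver's /healthz endpoint until it stops returning 500 for the rbac/bootstrap-roles post-start hook. A minimal shell equivalent of that loop (-k skips TLS verification, since the minikube CA is not in the system trust store):

    until curl -ks https://192.168.61.104:8444/healthz | grep -qx ok; do
      sleep 0.5   # minikube polls roughly twice per second, as the timestamps above show
    done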
	I0830 22:19:34.938804  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:19:34.938813  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:34.941052  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:19:34.942805  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:34.967544  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
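The 457-byte conflist written here is minikube's default bridge CNI configuration for the 10.244.0.0/16 pod CIDR. Its exact contents are not shown in the log; a representative bridge conflist would look roughly like this (illustrative only, not the verbatim file):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF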
	I0830 22:19:34.998450  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:35.012600  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:35.012681  995192 system_pods.go:61] "coredns-5dd5756b68-992p2" [83ad338b-0338-45c3-a5ed-f772d100046b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:35.012702  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [4ed4f652-47c4-4d79-b8a8-dd0cc778bce0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:19:35.012714  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [c01b9dfc-ad6f-4348-abf0-fde4a64bfa98] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:19:35.012732  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [94cbccaf-3d5a-480c-8ee0-b8af5030909d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:19:35.012748  995192 system_pods.go:61] "kube-proxy-vckmf" [03f05466-f99b-4803-9164-233bfb9e7bb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:19:35.012760  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [2c5e190d-c93b-400a-8538-e31cc2016cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:19:35.012774  995192 system_pods.go:61] "metrics-server-57f55c9bc5-p8pp2" [4eaff1be-4258-427b-a110-47dabbffecee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:19:35.012788  995192 system_pods.go:61] "storage-provisioner" [8db3da8b-8256-405d-8d9c-79fdb6da8ab2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:19:35.012800  995192 system_pods.go:74] duration metric: took 14.324835ms to wait for pod list to return data ...
	I0830 22:19:35.012814  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:35.024186  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:35.024216  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:35.024229  995192 node_conditions.go:105] duration metric: took 11.409776ms to run NodePressure ...
	I0830 22:19:35.024284  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:35.318824  995192 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324484  995192 kubeadm.go:787] kubelet initialised
	I0830 22:19:35.324512  995192 kubeadm.go:788] duration metric: took 5.656923ms waiting for restarted kubelet to initialise ...
	I0830 22:19:35.324525  995192 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:19:35.334137  995192 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:34.810276  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:34.810797  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:34.810836  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:34.810732  996676 retry.go:31] will retry after 2.550508742s: waiting for machine to come up
	I0830 22:19:37.364002  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:37.364550  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:37.364582  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:37.364489  996676 retry.go:31] will retry after 2.230782644s: waiting for machine to come up
	I0830 22:19:34.771235  995603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:19:34.787672  995603 ssh_runner.go:195] Run: openssl version
	I0830 22:19:34.793400  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:19:34.803208  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808108  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.808166  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:19:34.814296  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:19:34.824791  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:19:34.838527  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844726  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.844789  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:19:34.852442  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:19:34.862510  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:19:34.875456  995603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880581  995603 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.880702  995603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:19:34.886591  995603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
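The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are not arbitrary: OpenSSL looks trusted certificates up by subject-name hash, so each PEM placed under /usr/share/ca-certificates gets a <hash>.0 link in /etc/ssl/certs. The hash comes straight from openssl, e.g.:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"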
	I0830 22:19:34.897133  995603 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:19:34.902292  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:19:34.908905  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:19:34.915276  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:19:34.921204  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:19:34.927878  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:19:34.934091  995603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
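Each of the openssl runs above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?"; a non-zero exit means the cert expires within a day and must be regenerated before the cluster restart. For example:

    if ! sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "etcd serving cert expires within 24h - regenerate before restarting the cluster"
    fi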
	I0830 22:19:34.940851  995603 kubeadm.go:404] StartCluster: {Name:old-k8s-version-250163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-250163 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:19:34.940966  995603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:19:34.941036  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:34.978950  995603 cri.go:89] found id: ""
	I0830 22:19:34.979038  995603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:19:34.988290  995603 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:19:34.988324  995603 kubeadm.go:636] restartCluster start
	I0830 22:19:34.988403  995603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:19:34.998277  995603 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:34.999385  995603 kubeconfig.go:92] found "old-k8s-version-250163" server: "https://192.168.39.10:8443"
	I0830 22:19:35.002017  995603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:19:35.013903  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.013962  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.028780  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.028800  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.028845  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.043243  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:35.543986  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:35.544109  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:35.555939  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.044164  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.044259  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.055496  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:36.544110  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:36.544243  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:36.555999  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.043535  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.043628  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.055019  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.543435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:37.543546  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:37.558778  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.044367  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.044482  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.058777  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:38.543327  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:38.543431  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:38.555133  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.043720  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.043874  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.059955  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:39.543461  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:39.543625  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:39.558707  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:37.360241  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:39.363755  995192 pod_ready.go:102] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:40.357373  995192 pod_ready.go:92] pod "coredns-5dd5756b68-992p2" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:40.357396  995192 pod_ready.go:81] duration metric: took 5.023230161s waiting for pod "coredns-5dd5756b68-992p2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:40.357409  995192 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:39.597197  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:39.597650  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:39.597684  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:39.597603  996676 retry.go:31] will retry after 3.562835127s: waiting for machine to come up
	I0830 22:19:43.161572  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:43.162020  994624 main.go:141] libmachine: (no-preload-698195) DBG | unable to find current IP address of domain no-preload-698195 in network mk-no-preload-698195
	I0830 22:19:43.162054  994624 main.go:141] libmachine: (no-preload-698195) DBG | I0830 22:19:43.161973  996676 retry.go:31] will retry after 5.409514109s: waiting for machine to come up
	I0830 22:19:40.044014  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.044104  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.059377  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:40.543910  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:40.544012  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:40.555295  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.043380  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.043493  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.055443  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:41.544046  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:41.544121  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:41.555832  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.043785  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.043876  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.054809  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.543376  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:42.543463  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:42.554254  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.043435  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.043543  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.054734  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:43.544308  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:43.544418  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:43.555603  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.044211  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.044291  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.055403  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:44.544013  995603 api_server.go:166] Checking apiserver status ...
	I0830 22:19:44.544117  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:19:44.555197  995603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:19:42.378396  995192 pod_ready.go:102] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:42.881428  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.881456  995192 pod_ready.go:81] duration metric: took 2.524040213s waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.881467  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892688  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.892718  995192 pod_ready.go:81] duration metric: took 11.243576ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.892731  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898434  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.898463  995192 pod_ready.go:81] duration metric: took 5.721888ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.898476  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904261  995192 pod_ready.go:92] pod "kube-proxy-vckmf" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:42.904287  995192 pod_ready.go:81] duration metric: took 5.803127ms waiting for pod "kube-proxy-vckmf" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:42.904299  995192 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153736  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:19:43.153763  995192 pod_ready.go:81] duration metric: took 249.454932ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:43.153777  995192 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	I0830 22:19:45.462667  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
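The pod_ready polling above is the in-process equivalent of a kubectl wait on the Ready condition for each of the system-critical labels listed earlier. From the shell, roughly the same check against this profile would be (context name assumed to match the minikube profile):

    kubectl --context default-k8s-diff-port-791007 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m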
	I0830 22:19:48.575718  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576172  994624 main.go:141] libmachine: (no-preload-698195) Found IP for machine: 192.168.72.28
	I0830 22:19:48.576206  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has current primary IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.576217  994624 main.go:141] libmachine: (no-preload-698195) Reserving static IP address...
	I0830 22:19:48.576671  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.576719  994624 main.go:141] libmachine: (no-preload-698195) Reserved static IP address: 192.168.72.28
	I0830 22:19:48.576754  994624 main.go:141] libmachine: (no-preload-698195) DBG | skip adding static IP to network mk-no-preload-698195 - found existing host DHCP lease matching {name: "no-preload-698195", mac: "52:54:00:5b:fc:d1", ip: "192.168.72.28"}
	I0830 22:19:48.576776  994624 main.go:141] libmachine: (no-preload-698195) DBG | Getting to WaitForSSH function...
	I0830 22:19:48.576792  994624 main.go:141] libmachine: (no-preload-698195) Waiting for SSH to be available...
	I0830 22:19:48.578953  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579261  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.579290  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.579398  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH client type: external
	I0830 22:19:48.579417  994624 main.go:141] libmachine: (no-preload-698195) DBG | Using SSH private key: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa (-rw-------)
	I0830 22:19:48.579451  994624 main.go:141] libmachine: (no-preload-698195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0830 22:19:48.579478  994624 main.go:141] libmachine: (no-preload-698195) DBG | About to run SSH command:
	I0830 22:19:48.579493  994624 main.go:141] libmachine: (no-preload-698195) DBG | exit 0
	I0830 22:19:48.679834  994624 main.go:141] libmachine: (no-preload-698195) DBG | SSH cmd err, output: <nil>: 
	I0830 22:19:48.680237  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetConfigRaw
	I0830 22:19:48.681064  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.683388  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.683844  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.683884  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.684153  994624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/config.json ...
	I0830 22:19:48.684435  994624 machine.go:88] provisioning docker machine ...
	I0830 22:19:48.684462  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:48.684708  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.684851  994624 buildroot.go:166] provisioning hostname "no-preload-698195"
	I0830 22:19:48.684883  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.685013  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.687508  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.687975  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.688018  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.688198  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.688413  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688599  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.688830  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.689061  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.689695  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.689718  994624 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-698195 && echo "no-preload-698195" | sudo tee /etc/hostname
	I0830 22:19:45.014985  995603 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:19:45.015030  995603 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:19:45.015045  995603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:19:45.015102  995603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:19:45.049952  995603 cri.go:89] found id: ""
	I0830 22:19:45.050039  995603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:19:45.065202  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:19:45.074198  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:19:45.074330  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083407  995603 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:19:45.083438  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:45.211527  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.256339  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.044735651s)
	I0830 22:19:46.256389  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.469714  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.542945  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:46.644533  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:19:46.644632  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:46.659432  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.182415  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:47.682613  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.182661  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:19:48.206336  995603 api_server.go:72] duration metric: took 1.561801361s to wait for apiserver process to appear ...
	I0830 22:19:48.206374  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:19:48.206399  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:50.136893  994705 start.go:369] acquired machines lock for "embed-certs-208903" in 1m0.108561967s
	I0830 22:19:50.136941  994705 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:19:50.136952  994705 fix.go:54] fixHost starting: 
	I0830 22:19:50.137347  994705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:19:50.137386  994705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:19:50.156678  994705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0830 22:19:50.157148  994705 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:19:50.157739  994705 main.go:141] libmachine: Using API Version  1
	I0830 22:19:50.157765  994705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:19:50.158103  994705 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:19:50.158283  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.158445  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetState
	I0830 22:19:50.160098  994705 fix.go:102] recreateIfNeeded on embed-certs-208903: state=Running err=<nil>
	W0830 22:19:50.160115  994705 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:19:50.162162  994705 out.go:177] * Updating the running kvm2 "embed-certs-208903" VM ...
	I0830 22:19:50.163634  994705 machine.go:88] provisioning docker machine ...
	I0830 22:19:50.163663  994705 main.go:141] libmachine: (embed-certs-208903) Calling .DriverName
	I0830 22:19:50.163906  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164077  994705 buildroot.go:166] provisioning hostname "embed-certs-208903"
	I0830 22:19:50.164104  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.164288  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.166831  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167198  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.167234  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.167371  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.167561  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167731  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.167902  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.168108  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.168592  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.168610  994705 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-208903 && echo "embed-certs-208903" | sudo tee /etc/hostname
	I0830 22:19:50.306738  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-208903
	
	I0830 22:19:50.306772  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.309523  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.309929  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.309974  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.310182  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.310349  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310638  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.310845  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.311027  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.311610  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.311644  994705 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-208903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-208903/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-208903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:50.433972  994705 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:50.434005  994705 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:50.434045  994705 buildroot.go:174] setting up certificates
	I0830 22:19:50.434057  994705 provision.go:83] configureAuth start
	I0830 22:19:50.434069  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetMachineName
	I0830 22:19:50.434388  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetIP
	I0830 22:19:50.437450  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.437883  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.437916  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.438115  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.440654  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.441059  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.441213  994705 provision.go:138] copyHostCerts
	I0830 22:19:50.441271  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:50.441283  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:50.441352  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:50.441453  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:50.441462  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:50.441481  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:50.441563  994705 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:50.441575  994705 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:50.441606  994705 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:50.441684  994705 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.embed-certs-208903 san=[192.168.50.159 192.168.50.159 localhost 127.0.0.1 minikube embed-certs-208903]
	I0830 22:19:50.721978  994705 provision.go:172] copyRemoteCerts
	I0830 22:19:50.722039  994705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:50.722072  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.724893  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725257  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.725289  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.725571  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.725799  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.726014  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.726181  994705 sshutil.go:53] new ssh client: &{IP:192.168.50.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/embed-certs-208903/id_rsa Username:docker}
	I0830 22:19:50.817217  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:50.843335  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:50.869233  994705 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:19:50.897508  994705 provision.go:86] duration metric: configureAuth took 463.432948ms
	I0830 22:19:50.897544  994705 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:50.897804  994705 config.go:182] Loaded profile config "embed-certs-208903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:50.897904  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHHostname
	I0830 22:19:50.900633  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901014  994705 main.go:141] libmachine: (embed-certs-208903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:50:90", ip: ""} in network mk-embed-certs-208903: {Iface:virbr2 ExpiryTime:2023-08-30 23:18:38 +0000 UTC Type:0 Mac:52:54:00:07:50:90 Iaid: IPaddr:192.168.50.159 Prefix:24 Hostname:embed-certs-208903 Clientid:01:52:54:00:07:50:90}
	I0830 22:19:50.901040  994705 main.go:141] libmachine: (embed-certs-208903) DBG | domain embed-certs-208903 has defined IP address 192.168.50.159 and MAC address 52:54:00:07:50:90 in network mk-embed-certs-208903
	I0830 22:19:50.901210  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHPort
	I0830 22:19:50.901404  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901547  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHKeyPath
	I0830 22:19:50.901680  994705 main.go:141] libmachine: (embed-certs-208903) Calling .GetSSHUsername
	I0830 22:19:50.901875  994705 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.902287  994705 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.50.159 22 <nil> <nil>}
	I0830 22:19:50.902310  994705 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:51.128816  994705 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.128855  994705 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0830 22:19:51.128866  994705 machine.go:91] provisioned docker machine in 965.212906ms
	I0830 22:19:51.128900  994705 fix.go:56] fixHost completed within 991.948899ms
	I0830 22:19:51.128906  994705 start.go:83] releasing machines lock for "embed-certs-208903", held for 991.990648ms
	W0830 22:19:51.129050  994705 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p embed-certs-208903" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0830 22:19:51.131823  994705 out.go:177] 
	W0830 22:19:51.133957  994705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0830 22:19:51.133985  994705 out.go:239] * 
	W0830 22:19:51.134788  994705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:19:51.136212  994705 out.go:177] 
	I0830 22:19:48.842387  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-698195
	
	I0830 22:19:48.842438  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:48.845727  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846100  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.846140  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.846429  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:48.846658  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846856  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:48.846991  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:48.847159  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:48.847578  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:48.847601  994624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-698195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-698195/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-698195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:19:48.994130  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:19:48.994176  994624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17114-955377/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-955377/.minikube}
	I0830 22:19:48.994211  994624 buildroot.go:174] setting up certificates
	I0830 22:19:48.994244  994624 provision.go:83] configureAuth start
	I0830 22:19:48.994270  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetMachineName
	I0830 22:19:48.994612  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:48.997772  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998170  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:48.998208  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:48.998416  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.001089  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001466  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.001498  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.001639  994624 provision.go:138] copyHostCerts
	I0830 22:19:49.001702  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem, removing ...
	I0830 22:19:49.001733  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem
	I0830 22:19:49.001808  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/ca.pem (1078 bytes)
	I0830 22:19:49.001927  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem, removing ...
	I0830 22:19:49.001937  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem
	I0830 22:19:49.001967  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/cert.pem (1123 bytes)
	I0830 22:19:49.002042  994624 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem, removing ...
	I0830 22:19:49.002057  994624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem
	I0830 22:19:49.002085  994624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-955377/.minikube/key.pem (1679 bytes)
	I0830 22:19:49.002169  994624 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem org=jenkins.no-preload-698195 san=[192.168.72.28 192.168.72.28 localhost 127.0.0.1 minikube no-preload-698195]
	I0830 22:19:49.376465  994624 provision.go:172] copyRemoteCerts
	I0830 22:19:49.376534  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:19:49.376565  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.379932  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380313  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.380354  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.380486  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.380738  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.380949  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.381109  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.474102  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0830 22:19:49.496563  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 22:19:49.518034  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:19:49.539392  994624 provision.go:86] duration metric: configureAuth took 545.126518ms
	I0830 22:19:49.539419  994624 buildroot.go:189] setting minikube options for container-runtime
	I0830 22:19:49.539623  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:19:49.539719  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.542336  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542665  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.542738  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.542839  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.543026  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543217  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.543341  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.543459  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:49.543864  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:49.543882  994624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:19:49.869021  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:19:49.869051  994624 machine.go:91] provisioned docker machine in 1.184598655s
	I0830 22:19:49.869065  994624 start.go:300] post-start starting for "no-preload-698195" (driver="kvm2")
	I0830 22:19:49.869079  994624 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:19:49.869110  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:49.869444  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:19:49.869481  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:49.871931  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872288  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:49.872333  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:49.872502  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:49.872706  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:49.872888  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:49.873027  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:49.969286  994624 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:19:49.973513  994624 info.go:137] Remote host: Buildroot 2021.02.12
	I0830 22:19:49.973532  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/addons for local assets ...
	I0830 22:19:49.973598  994624 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-955377/.minikube/files for local assets ...
	I0830 22:19:49.973671  994624 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem -> 9626212.pem in /etc/ssl/certs
	I0830 22:19:49.973768  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:19:49.982880  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:19:50.006097  994624 start.go:303] post-start completed in 137.016363ms
	I0830 22:19:50.006124  994624 fix.go:56] fixHost completed within 24.947983055s
	I0830 22:19:50.006150  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.008513  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.008880  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.008908  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.009134  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.009371  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009560  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.009755  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.009933  994624 main.go:141] libmachine: Using SSH client type: native
	I0830 22:19:50.010372  994624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 192.168.72.28 22 <nil> <nil>}
	I0830 22:19:50.010402  994624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0830 22:19:50.136709  994624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693433990.121404659
	
	I0830 22:19:50.136738  994624 fix.go:206] guest clock: 1693433990.121404659
	I0830 22:19:50.136748  994624 fix.go:219] Guest: 2023-08-30 22:19:50.121404659 +0000 UTC Remote: 2023-08-30 22:19:50.006128322 +0000 UTC m=+361.306139641 (delta=115.276337ms)
	I0830 22:19:50.136792  994624 fix.go:190] guest clock delta is within tolerance: 115.276337ms
	I0830 22:19:50.136800  994624 start.go:83] releasing machines lock for "no-preload-698195", held for 25.078698183s
	I0830 22:19:50.136834  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.137143  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:50.139834  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140214  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.140249  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.140387  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.140890  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141088  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:19:50.141191  994624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:19:50.141243  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.141312  994624 ssh_runner.go:195] Run: cat /version.json
	I0830 22:19:50.141335  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:19:50.144030  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144283  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144434  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144462  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144598  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144736  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:50.144768  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:50.144791  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.144912  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:19:50.144987  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145152  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:19:50.145161  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.145318  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:19:50.145433  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:19:50.257719  994624 ssh_runner.go:195] Run: systemctl --version
	I0830 22:19:50.263507  994624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:19:50.411574  994624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0830 22:19:50.418796  994624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0830 22:19:50.418872  994624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:19:50.435922  994624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:19:50.435943  994624 start.go:466] detecting cgroup driver to use...
	I0830 22:19:50.436022  994624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:19:50.450969  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:19:50.463538  994624 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:19:50.463596  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:19:50.475797  994624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:19:50.488143  994624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:19:50.586327  994624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:19:50.697497  994624 docker.go:212] disabling docker service ...
	I0830 22:19:50.697587  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:19:50.712369  994624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:19:50.726039  994624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:19:50.840596  994624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:19:50.967799  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:19:50.984629  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:19:51.006331  994624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:19:51.006404  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.017150  994624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:19:51.017241  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.028714  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.040075  994624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:19:51.054520  994624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:19:51.067179  994624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:19:51.077610  994624 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0830 22:19:51.077685  994624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0830 22:19:51.093337  994624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:19:51.104110  994624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:19:51.243534  994624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 22:19:51.455149  994624 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:19:51.455232  994624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:19:51.462110  994624 start.go:534] Will wait 60s for crictl version
	I0830 22:19:51.462185  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:51.468872  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:19:51.509838  994624 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0830 22:19:51.509924  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.562065  994624 ssh_runner.go:195] Run: crio --version
	I0830 22:19:51.630813  994624 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0830 22:19:47.961668  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:50.461541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:51.632256  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetIP
	I0830 22:19:51.636020  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636430  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:19:51.636458  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:19:51.636633  994624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0830 22:19:51.641003  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:19:51.655539  994624 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:19:51.655595  994624 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:19:51.691423  994624 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0830 22:19:51.691455  994624 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 22:19:51.691508  994624 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.691795  994624 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.691800  994624 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.691932  994624 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.692015  994624 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.692204  994624 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.692383  994624 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693156  994624 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.693256  994624 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.693294  994624 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.693393  994624 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.693613  994624 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.693700  994624 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0830 22:19:51.693767  994624 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.694704  994624 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.695502  994624 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.858227  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.862141  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:51.862588  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:51.864659  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:51.872937  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0830 22:19:51.885087  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:51.912710  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:51.970615  994624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:51.978831  994624 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0830 22:19:51.978883  994624 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:51.978930  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.004057  994624 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0830 22:19:52.004112  994624 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.004153  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031261  994624 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0830 22:19:52.031330  994624 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.031350  994624 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0830 22:19:52.031393  994624 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.031456  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.031394  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168753  994624 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0830 22:19:52.168817  994624 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.168842  994624 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0830 22:19:52.168760  994624 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0830 22:19:52.168882  994624 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.168906  994624 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.168931  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168944  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168948  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0830 22:19:52.168877  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:19:52.168988  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0830 22:19:52.169048  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0830 22:19:52.169156  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0830 22:19:52.218220  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0830 22:19:52.218353  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.235432  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0830 22:19:52.235565  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0830 22:19:52.235575  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0830 22:19:52.235692  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:52.246243  994624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:19:52.246437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0830 22:19:52.246550  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:52.260976  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0830 22:19:52.261024  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0830 22:19:52.261041  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0830 22:19:52.261090  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:19:52.262450  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0830 22:19:52.316437  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0830 22:19:52.316556  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:19:52.316709  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0830 22:19:52.316807  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:19:52.330026  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0830 22:19:52.330185  994624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0830 22:19:52.330318  994624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
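Because no preload tarball matched v1.28.1 with crio, every image is loaded from the per-image cache instead: each tarball is probed with stat on the node, skipped with "copy: skipping ... (exists)" when already present, and then loaded through podman. A short Go sketch of that exists-then-load decision; remoteFileExists, copyFile and runCmd are hypothetical stand-ins for minikube's ssh_runner helpers, not its real API.

    package main

    import "fmt"

    // loadCachedImage mirrors the flow in the log: stat the tarball on the node,
    // transfer it only if missing, then load it into CRI-O via podman.
    func loadCachedImage(localPath, remotePath string,
        remoteFileExists func(path string) bool,
        copyFile func(src, dst string) error,
        runCmd func(cmd string) error) error {

        if !remoteFileExists(remotePath) {
            if err := copyFile(localPath, remotePath); err != nil {
                return fmt.Errorf("copy %s: %w", localPath, err)
            }
        } // else: "copy: skipping <remotePath> (exists)", as in the log

        return runCmd("sudo podman load -i " + remotePath)
    }

    func main() {
        _ = loadCachedImage // sketch only; wiring up real runners is out of scope here
    }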
	I0830 22:19:53.207917  995603 api_server.go:269] stopped: https://192.168.39.10:8443/healthz: Get "https://192.168.39.10:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0830 22:19:53.207968  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.224442  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:19:54.224482  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:19:54.724967  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:54.732845  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:54.732880  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.224677  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.231265  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0830 22:19:55.231302  995603 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0830 22:19:55.725325  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:19:55.731785  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:19:55.739996  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:19:55.740025  995603 api_server.go:131] duration metric: took 7.533643458s to wait for apiserver health ...
	I0830 22:19:55.740037  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:19:55.740046  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:19:55.742083  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
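The healthz wait above shows the apiserver moving from 403 (anonymous user, RBAC bootstrap roles not yet installed) through 500 (some post-start hooks still failing) to 200, at which point the loop stops and CNI configuration proceeds. A minimal Go sketch of that polling pattern; the function and its parameters are assumptions for illustration, not minikube's api_server.go, and the insecure TLS config only reflects that this probe runs before the client trusts the apiserver's self-signed certificate.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls https://<host>/healthz until it returns 200 or the
    // deadline passes; 403 and 500 responses (as seen in the log) are retried.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
        }
        return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
        _ = waitForHealthz // usage: waitForHealthz("https://192.168.39.10:8443/healthz", time.Minute)
    }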
	I0830 22:19:52.462806  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:54.462856  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:56.962847  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:19:55.697808  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1: (3.436622341s)
	I0830 22:19:55.697847  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0830 22:19:55.697882  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1: (3.381312107s)
	I0830 22:19:55.697895  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0830 22:19:55.697927  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (3.436796784s)
	I0830 22:19:55.697959  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0830 22:19:55.697985  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.381155963s)
	I0830 22:19:55.698014  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0830 22:19:55.697989  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:55.698035  994624 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.367694611s)
	I0830 22:19:55.698065  994624 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0830 22:19:55.698072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0830 22:19:57.158231  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.460131868s)
	I0830 22:19:57.158266  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0830 22:19:57.158302  994624 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:57.158371  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0830 22:19:55.743724  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:19:55.755829  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:19:55.777604  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:19:55.792182  995603 system_pods.go:59] 8 kube-system pods found
	I0830 22:19:55.792221  995603 system_pods.go:61] "coredns-5644d7b6d9-872nn" [acd3b375-2486-48c3-9032-6386a091128a] Running
	I0830 22:19:55.792232  995603 system_pods.go:61] "coredns-5644d7b6d9-lqn5v" [48a574c1-b546-4060-9aba-1e2bcdaf7541] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:19:55.792240  995603 system_pods.go:61] "etcd-old-k8s-version-250163" [8d4eb3c4-a10b-4803-a1cd-28199081480d] Running
	I0830 22:19:55.792247  995603 system_pods.go:61] "kube-apiserver-old-k8s-version-250163" [c2cb0944-0836-4419-9bcf-8b6ddcb8de4f] Running
	I0830 22:19:55.792253  995603 system_pods.go:61] "kube-controller-manager-old-k8s-version-250163" [953d90e1-21ec-47a8-916a-9641616443a3] Running
	I0830 22:19:55.792259  995603 system_pods.go:61] "kube-proxy-qg82w" [58c1bd37-de42-46db-8337-cad3969dbbe3] Running
	I0830 22:19:55.792265  995603 system_pods.go:61] "kube-scheduler-old-k8s-version-250163" [ead115ca-3faa-457a-a29d-6de753bf53ab] Running
	I0830 22:19:55.792271  995603 system_pods.go:61] "storage-provisioner" [e481c13c-17b5-4a76-8f56-01decf4d2dde] Running
	I0830 22:19:55.792278  995603 system_pods.go:74] duration metric: took 14.654143ms to wait for pod list to return data ...
	I0830 22:19:55.792291  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:19:55.800500  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:19:55.800529  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:19:55.800541  995603 node_conditions.go:105] duration metric: took 8.245305ms to run NodePressure ...
	I0830 22:19:55.800572  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:19:56.165598  995603 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:19:56.173177  995603 retry.go:31] will retry after 155.771258ms: kubelet not initialised
	I0830 22:19:56.335243  995603 retry.go:31] will retry after 435.88083ms: kubelet not initialised
	I0830 22:19:56.900108  995603 retry.go:31] will retry after 318.649581ms: kubelet not initialised
	I0830 22:19:57.226618  995603 retry.go:31] will retry after 906.607144ms: kubelet not initialised
	I0830 22:19:58.169644  995603 retry.go:31] will retry after 1.480507319s: kubelet not initialised
	I0830 22:19:59.662899  995603 retry.go:31] will retry after 1.43965579s: kubelet not initialised
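The retry.go lines show the "kubelet not initialised" check being repeated with growing, jittered delays (156ms, 436ms, 319ms, 907ms, 1.48s, 1.44s, ...). A compact Go sketch of that retry-until-deadline shape, with illustrative names and a capped backoff; it is not minikube's retry package, only the same idea.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil calls check until it succeeds or the timeout expires, sleeping a
    // growing, jittered delay between attempts, in the spirit of the
    // "will retry after ..." messages above.
    func retryUntil(timeout time.Duration, check func() error) error {
        delay := 150 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := check(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1)) // add jitter
            fmt.Printf("will retry after %s: kubelet not initialised\n", sleep)
            time.Sleep(sleep)
            if delay *= 2; delay > 10*time.Second {
                delay = 10 * time.Second // cap the backoff
            }
        }
        return errors.New("timed out waiting for kubelet to initialise")
    }

    func main() {
        _ = retryUntil // usage: retryUntil(2*time.Minute, checkKubeletInitialised)
    }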
	I0830 22:19:59.462944  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.463843  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:01.109412  995603 retry.go:31] will retry after 2.769965791s: kubelet not initialised
	I0830 22:20:03.884087  995603 retry.go:31] will retry after 5.524462984s: kubelet not initialised
	I0830 22:20:03.962393  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:06.463083  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:03.920494  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (6.762089682s)
	I0830 22:20:03.920528  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0830 22:20:03.920563  994624 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:03.920618  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0830 22:20:05.471647  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.551002795s)
	I0830 22:20:05.471696  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0830 22:20:05.471725  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:05.471808  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0830 22:20:07.437922  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (1.966087689s)
	I0830 22:20:07.437952  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0830 22:20:07.437986  994624 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:07.438046  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0830 22:20:09.418426  995603 retry.go:31] will retry after 8.161662984s: kubelet not initialised
	I0830 22:20:08.961616  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:10.962062  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:09.894897  994624 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (2.456819743s)
	I0830 22:20:09.894932  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0830 22:20:09.895001  994624 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:09.895072  994624 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0830 22:20:10.848591  994624 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17114-955377/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0830 22:20:10.848635  994624 cache_images.go:123] Successfully loaded all cached images
	I0830 22:20:10.848641  994624 cache_images.go:92] LoadImages completed in 19.157171696s
	I0830 22:20:10.848726  994624 ssh_runner.go:195] Run: crio config
	I0830 22:20:10.912483  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:10.912514  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:10.912545  994624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:20:10.912574  994624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.28 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-698195 NodeName:no-preload-698195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:20:10.912729  994624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-698195"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 22:20:10.912793  994624 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-698195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
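The kubelet drop-in above is rendered from the node's parameters and then streamed to the host; the "scp memory -->" lines that follow show the generated bytes being written straight to their target paths with an explicit size rather than copied from a local file. A Go sketch of rendering such a drop-in with text/template; the template text and type names here are illustrative assumptions, not minikube's actual template.

    package main

    import (
        "bytes"
        "fmt"
        "text/template"
    )

    // kubeletDropIn is an illustrative template for the 10-kubeadm.conf drop-in
    // whose rendered form appears in the log above.
    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    type nodeParams struct {
        KubernetesVersion, NodeName, NodeIP string
    }

    func renderDropIn(p nodeParams) ([]byte, error) {
        t, err := template.New("kubelet").Parse(kubeletDropIn)
        if err != nil {
            return nil, err
        }
        var buf bytes.Buffer
        if err := t.Execute(&buf, p); err != nil {
            return nil, err
        }
        return buf.Bytes(), nil
    }

    func main() {
        b, err := renderDropIn(nodeParams{"v1.28.1", "no-preload-698195", "192.168.72.28"})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d bytes rendered\n", len(b)) // the log reports 376 bytes for the real drop-in
    }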
	I0830 22:20:10.912850  994624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:20:10.922383  994624 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:20:10.922470  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:20:10.931904  994624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0830 22:20:10.947603  994624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:20:10.963835  994624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0830 22:20:10.982645  994624 ssh_runner.go:195] Run: grep 192.168.72.28	control-plane.minikube.internal$ /etc/hosts
	I0830 22:20:10.986493  994624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:20:10.999967  994624 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195 for IP: 192.168.72.28
	I0830 22:20:11.000000  994624 certs.go:190] acquiring lock for shared ca certs: {Name:mk2e67e13d79d15c3d002a96b2a7f288fae16325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:11.000190  994624 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key
	I0830 22:20:11.000252  994624 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key
	I0830 22:20:11.000348  994624 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.key
	I0830 22:20:11.000455  994624 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key.f951a290
	I0830 22:20:11.000518  994624 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key
	I0830 22:20:11.000668  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem (1338 bytes)
	W0830 22:20:11.000712  994624 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621_empty.pem, impossibly tiny 0 bytes
	I0830 22:20:11.000728  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca-key.pem (1679 bytes)
	I0830 22:20:11.000844  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/ca.pem (1078 bytes)
	I0830 22:20:11.000881  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:20:11.000917  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/certs/home/jenkins/minikube-integration/17114-955377/.minikube/certs/key.pem (1679 bytes)
	I0830 22:20:11.000978  994624 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem (1708 bytes)
	I0830 22:20:11.001876  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:20:11.025256  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:20:11.048414  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:20:11.072696  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:20:11.097029  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:20:11.123653  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:20:11.152564  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:20:11.180885  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0830 22:20:11.204194  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/ssl/certs/9626212.pem --> /usr/share/ca-certificates/9626212.pem (1708 bytes)
	I0830 22:20:11.227365  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:20:11.249804  994624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-955377/.minikube/certs/962621.pem --> /usr/share/ca-certificates/962621.pem (1338 bytes)
	I0830 22:20:11.272563  994624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:20:11.289225  994624 ssh_runner.go:195] Run: openssl version
	I0830 22:20:11.295235  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:20:11.304745  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309554  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:10 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.309615  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:20:11.314775  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:20:11.327372  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/962621.pem && ln -fs /usr/share/ca-certificates/962621.pem /etc/ssl/certs/962621.pem"
	I0830 22:20:11.338944  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344731  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:18 /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.344797  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/962621.pem
	I0830 22:20:11.350242  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/962621.pem /etc/ssl/certs/51391683.0"
	I0830 22:20:11.359913  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9626212.pem && ln -fs /usr/share/ca-certificates/9626212.pem /etc/ssl/certs/9626212.pem"
	I0830 22:20:11.369367  994624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373467  994624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:18 /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.373511  994624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9626212.pem
	I0830 22:20:11.378731  994624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9626212.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:20:11.387877  994624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:20:11.392496  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 22:20:11.398057  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 22:20:11.403555  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 22:20:11.409343  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 22:20:11.414914  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 22:20:11.420465  994624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
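Each control-plane certificate is checked with openssl x509 -checkend 86400, which fails when the certificate expires within the next 24 hours; the subject-hash symlinks created just before (b5213941.0, 51391683.0, 3ec20f2e.0) make the CA files resolvable by OpenSSL's hashed lookup. A Go sketch of the same 24-hour expiry check using crypto/x509 instead of shelling out; the paths are taken from the log, the helper name is an assumption.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the question `openssl x509 -checkend 86400` answers for d = 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found in " + path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            fmt.Println(p, soon, err)
        }
    }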
	I0830 22:20:11.425887  994624 kubeadm.go:404] StartCluster: {Name:no-preload-698195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-698195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:20:11.425988  994624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:20:11.426031  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:11.458215  994624 cri.go:89] found id: ""
	I0830 22:20:11.458307  994624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:20:11.468981  994624 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 22:20:11.469010  994624 kubeadm.go:636] restartCluster start
	I0830 22:20:11.469068  994624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 22:20:11.478113  994624 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.479707  994624 kubeconfig.go:92] found "no-preload-698195" server: "https://192.168.72.28:8443"
	I0830 22:20:11.483097  994624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 22:20:11.492068  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.492123  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.502752  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:11.502766  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:11.502803  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:11.514139  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.014881  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.014982  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.027078  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:12.514591  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:12.514686  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:12.529329  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.014971  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.015068  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.026874  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.514310  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:13.514395  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:13.526406  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:13.461372  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:15.961535  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:14.014646  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.014750  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.026467  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:14.515116  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:14.515212  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:14.527110  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.014622  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.014713  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.026083  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:15.515205  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:15.515304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:15.530248  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.014368  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.014472  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.025785  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:16.514315  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:16.514390  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:16.525823  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.014305  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.014410  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.025657  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.515255  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:17.515331  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:17.527967  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.014524  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.014603  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.025912  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:18.514454  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:18.514533  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:18.526034  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:17.586022  995603 retry.go:31] will retry after 7.910874514s: kubelet not initialised
	I0830 22:20:18.460574  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:20.460727  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:19.014477  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.014563  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.025688  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:19.514231  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:19.514318  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:19.526253  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.014551  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.014632  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.026223  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:20.515044  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:20.515142  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:20.526336  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.014933  994624 api_server.go:166] Checking apiserver status ...
	I0830 22:20:21.015017  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0830 22:20:21.026315  994624 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0830 22:20:21.492708  994624 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0830 22:20:21.492739  994624 kubeadm.go:1128] stopping kube-system containers ...
	I0830 22:20:21.492755  994624 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 22:20:21.492837  994624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:20:21.528882  994624 cri.go:89] found id: ""
	I0830 22:20:21.528979  994624 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 22:20:21.545258  994624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:20:21.554325  994624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:20:21.554387  994624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563086  994624 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 22:20:21.563121  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:21.688507  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.342362  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.552586  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.618512  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:22.699936  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:20:22.700029  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.715983  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.231090  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:23.730985  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:22.462833  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.462913  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:26.960795  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:24.230937  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:24.730685  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.230888  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:20:25.256876  994624 api_server.go:72] duration metric: took 2.556939469s to wait for apiserver process to appear ...
	I0830 22:20:25.256907  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:20:25.256929  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:25.502804  995603 retry.go:31] will retry after 19.65596925s: kubelet not initialised
	I0830 22:20:28.908329  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.908366  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:28.908382  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:28.973483  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 22:20:28.973534  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 22:20:29.474026  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.480796  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.480850  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:29.974406  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:29.981421  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 22:20:29.981453  994624 api_server.go:103] status: https://192.168.72.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 22:20:30.474452  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:20:30.479311  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:20:30.490550  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:20:30.490581  994624 api_server.go:131] duration metric: took 5.233664737s to wait for apiserver health ...
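The api_server.go lines above poll https://192.168.72.28:8443/healthz every ~500ms, treating 403 ("system:anonymous") and 500 ("[-]poststarthook/rbac/bootstrap-roles failed") responses as "not ready yet" until a plain 200 "ok" arrives. The following is a minimal, self-contained Go sketch of that kind of wait loop; the TLS handling (InsecureSkipVerify instead of the cluster CA), interval, and timeout are illustrative assumptions, not minikube's actual implementation.

// healthz_poll.go: sketch of an apiserver /healthz wait loop (assumptions noted above).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real check trusts the cluster CA; skipping verification keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 mean "not ready yet"; only 200 with body "ok" counts as healthy.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.28:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}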
	I0830 22:20:30.490621  994624 cni.go:84] Creating CNI manager for ""
	I0830 22:20:30.490634  994624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:20:30.492834  994624 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:20:28.962577  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:31.461661  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:30.494469  994624 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:20:30.508611  994624 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
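The two lines above cover the "Configuring bridge CNI" step: a conflist is copied to /etc/cni/net.d/1-k8s.conflist on the node. As a rough illustration, a bridge + host-local + portmap conflist of that shape looks like the sketch below; the field values are generic bridge-plugin defaults and are assumptions, not the exact 457-byte file minikube generates, and the write happens locally here rather than over SSH with sudo as in the test.

// cni_conflist.go: sketch of writing a bridge CNI conflist (contents are assumed defaults).
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}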
	I0830 22:20:30.536470  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:20:30.547285  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:20:30.547321  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0830 22:20:30.547339  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 22:20:30.547352  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 22:20:30.547361  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 22:20:30.547369  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0830 22:20:30.547379  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 22:20:30.547391  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:20:30.547405  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:20:30.547416  994624 system_pods.go:74] duration metric: took 10.921869ms to wait for pod list to return data ...
	I0830 22:20:30.547428  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:20:30.550787  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:20:30.550816  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:20:30.550828  994624 node_conditions.go:105] duration metric: took 3.391486ms to run NodePressure ...
	I0830 22:20:30.550856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 22:20:30.786117  994624 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793653  994624 kubeadm.go:787] kubelet initialised
	I0830 22:20:30.793680  994624 kubeadm.go:788] duration metric: took 7.533543ms waiting for restarted kubelet to initialise ...
	I0830 22:20:30.793694  994624 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:30.800474  994624 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.808844  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808869  994624 pod_ready.go:81] duration metric: took 8.371156ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.808879  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.808888  994624 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.823461  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823487  994624 pod_ready.go:81] duration metric: took 14.590789ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.823497  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "etcd-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.823504  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.834123  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834150  994624 pod_ready.go:81] duration metric: took 10.63758ms waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.834158  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-apiserver-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.834164  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:30.951589  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951620  994624 pod_ready.go:81] duration metric: took 117.448834ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:30.951628  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:30.951635  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.343471  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343497  994624 pod_ready.go:81] duration metric: took 391.855831ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.343506  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-proxy-5fjvd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.343512  994624 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:31.741491  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741527  994624 pod_ready.go:81] duration metric: took 398.007277ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:31.741539  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "kube-scheduler-no-preload-698195" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:31.741555  994624 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:32.141918  994624 pod_ready.go:97] node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141952  994624 pod_ready.go:81] duration metric: took 400.379332ms waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:20:32.141961  994624 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-698195" hosting pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:32.141969  994624 pod_ready.go:38] duration metric: took 1.348263054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
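The pod_ready.go lines above poll each system-critical pod until its Ready condition is True (or, as here, bail out per pod while the node itself reports "Ready":"False"). A hypothetical, stripped-down version of that check using client-go is sketched below; the label filtering, per-pod timeouts, and the node-NotReady skip logic of the real code are omitted, and the kubeconfig path and pod name are taken from the log only as examples.

// pod_ready_sketch.go: minimal "wait for pod Ready" loop (simplifying assumptions noted above).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17114-955377/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPodReady(cs, "kube-system", "coredns-5dd5756b68-hlwf8", 4*time.Minute))
}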
	I0830 22:20:32.141987  994624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:20:32.153800  994624 ops.go:34] apiserver oom_adj: -16
	I0830 22:20:32.153828  994624 kubeadm.go:640] restartCluster took 20.684809572s
	I0830 22:20:32.153848  994624 kubeadm.go:406] StartCluster complete in 20.727972693s
	I0830 22:20:32.153868  994624 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.153955  994624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:20:32.155765  994624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:20:32.156054  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:20:32.156162  994624 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:20:32.156265  994624 addons.go:69] Setting storage-provisioner=true in profile "no-preload-698195"
	I0830 22:20:32.156285  994624 addons.go:231] Setting addon storage-provisioner=true in "no-preload-698195"
	I0830 22:20:32.156288  994624 addons.go:69] Setting default-storageclass=true in profile "no-preload-698195"
	I0830 22:20:32.156307  994624 addons.go:69] Setting metrics-server=true in profile "no-preload-698195"
	I0830 22:20:32.156344  994624 addons.go:231] Setting addon metrics-server=true in "no-preload-698195"
	I0830 22:20:32.156318  994624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-698195"
	I0830 22:20:32.156396  994624 config.go:182] Loaded profile config "no-preload-698195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	W0830 22:20:32.156293  994624 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:20:32.156512  994624 host.go:66] Checking if "no-preload-698195" exists ...
	W0830 22:20:32.156358  994624 addons.go:240] addon metrics-server should already be in state true
	I0830 22:20:32.156570  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.156821  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156847  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156849  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156867  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.156948  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.156961  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.165443  994624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-698195" context rescaled to 1 replicas
	I0830 22:20:32.165497  994624 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.28 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:20:32.167715  994624 out.go:177] * Verifying Kubernetes components...
	I0830 22:20:32.169310  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:20:32.176341  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0830 22:20:32.176876  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0830 22:20:32.177070  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0830 22:20:32.177253  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177447  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177562  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.177829  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.177856  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178014  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.178032  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.178387  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179460  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179499  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.179517  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.179897  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.179957  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.179996  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180272  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.180293  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.180423  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.201009  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I0830 22:20:32.201548  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.201926  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0830 22:20:32.202180  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202200  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.202304  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.202785  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.202842  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.202865  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.203052  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.203202  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.203391  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.204424  994624 addons.go:231] Setting addon default-storageclass=true in "no-preload-698195"
	W0830 22:20:32.204450  994624 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:20:32.204491  994624 host.go:66] Checking if "no-preload-698195" exists ...
	I0830 22:20:32.204897  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.204931  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.205076  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.207516  994624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:20:32.206126  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.209336  994624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:20:32.210840  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:20:32.209276  994624 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.210862  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:20:32.210877  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:20:32.210890  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.210897  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.214370  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214385  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214769  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214813  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.214829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.214841  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.215131  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215199  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.215346  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215387  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.215521  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215580  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.215651  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.215748  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.244173  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0830 22:20:32.244664  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.245311  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.245343  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.245760  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.246361  994624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:20:32.246416  994624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:20:32.263737  994624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0830 22:20:32.264177  994624 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:20:32.264737  994624 main.go:141] libmachine: Using API Version  1
	I0830 22:20:32.264761  994624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:20:32.265106  994624 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:20:32.265342  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetState
	I0830 22:20:32.266996  994624 main.go:141] libmachine: (no-preload-698195) Calling .DriverName
	I0830 22:20:32.267406  994624 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.267430  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:20:32.267454  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHHostname
	I0830 22:20:32.270345  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.270799  994624 main.go:141] libmachine: (no-preload-698195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:d1", ip: ""} in network mk-no-preload-698195: {Iface:virbr4 ExpiryTime:2023-08-30 23:19:38 +0000 UTC Type:0 Mac:52:54:00:5b:fc:d1 Iaid: IPaddr:192.168.72.28 Prefix:24 Hostname:no-preload-698195 Clientid:01:52:54:00:5b:fc:d1}
	I0830 22:20:32.270829  994624 main.go:141] libmachine: (no-preload-698195) DBG | domain no-preload-698195 has defined IP address 192.168.72.28 and MAC address 52:54:00:5b:fc:d1 in network mk-no-preload-698195
	I0830 22:20:32.271021  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHPort
	I0830 22:20:32.271215  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHKeyPath
	I0830 22:20:32.271403  994624 main.go:141] libmachine: (no-preload-698195) Calling .GetSSHUsername
	I0830 22:20:32.271526  994624 sshutil.go:53] new ssh client: &{IP:192.168.72.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/no-preload-698195/id_rsa Username:docker}
	I0830 22:20:32.362089  994624 node_ready.go:35] waiting up to 6m0s for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:32.362281  994624 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0830 22:20:32.371216  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:20:32.372220  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:20:32.372240  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:20:32.396916  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:20:32.396942  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:20:32.417651  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:20:32.430668  994624 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:32.430699  994624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:20:32.476147  994624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:20:33.655453  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.284190116s)
	I0830 22:20:33.655495  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.237806074s)
	I0830 22:20:33.655515  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655532  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655519  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655602  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.655854  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.655875  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.655885  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.655894  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656045  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656082  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656095  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656115  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656160  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656169  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656180  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656195  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656394  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656432  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656437  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.656455  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.656465  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.656729  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.656741  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.656754  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.802947  994624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326756295s)
	I0830 22:20:33.802994  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803016  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803349  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803371  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803381  994624 main.go:141] libmachine: Making call to close driver server
	I0830 22:20:33.803391  994624 main.go:141] libmachine: (no-preload-698195) Calling .Close
	I0830 22:20:33.803393  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803632  994624 main.go:141] libmachine: (no-preload-698195) DBG | Closing plugin on server side
	I0830 22:20:33.803682  994624 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:20:33.803700  994624 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:20:33.803720  994624 addons.go:467] Verifying addon metrics-server=true in "no-preload-698195"
	I0830 22:20:33.805489  994624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:20:33.462414  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:35.961487  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:33.806934  994624 addons.go:502] enable addons completed in 1.650789204s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:20:34.550814  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:36.551274  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:38.551355  994624 node_ready.go:58] node "no-preload-698195" has status "Ready":"False"
	I0830 22:20:37.963175  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:40.462510  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:39.550464  994624 node_ready.go:49] node "no-preload-698195" has status "Ready":"True"
	I0830 22:20:39.550505  994624 node_ready.go:38] duration metric: took 7.188369926s waiting for node "no-preload-698195" to be "Ready" ...
	I0830 22:20:39.550516  994624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:39.556533  994624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562470  994624 pod_ready.go:92] pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.562498  994624 pod_ready.go:81] duration metric: took 5.934964ms waiting for pod "coredns-5dd5756b68-hlwf8" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.562511  994624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568348  994624 pod_ready.go:92] pod "etcd-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:39.568371  994624 pod_ready.go:81] duration metric: took 5.853085ms waiting for pod "etcd-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:39.568380  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:41.593857  994624 pod_ready.go:102] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:42.594544  994624 pod_ready.go:92] pod "kube-apiserver-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.594572  994624 pod_ready.go:81] duration metric: took 3.026185311s waiting for pod "kube-apiserver-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.594586  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599820  994624 pod_ready.go:92] pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.599844  994624 pod_ready.go:81] duration metric: took 5.249213ms waiting for pod "kube-controller-manager-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.599856  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751073  994624 pod_ready.go:92] pod "kube-proxy-5fjvd" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:42.751096  994624 pod_ready.go:81] duration metric: took 151.233562ms waiting for pod "kube-proxy-5fjvd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.751105  994624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150620  994624 pod_ready.go:92] pod "kube-scheduler-no-preload-698195" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:43.150646  994624 pod_ready.go:81] duration metric: took 399.535323ms waiting for pod "kube-scheduler-no-preload-698195" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:43.150656  994624 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:42.464235  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:44.960831  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:46.961923  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.458489  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:47.958322  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:45.165236  995603 kubeadm.go:787] kubelet initialised
	I0830 22:20:45.165261  995603 kubeadm.go:788] duration metric: took 48.999634631s waiting for restarted kubelet to initialise ...
	I0830 22:20:45.165269  995603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:20:45.170939  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176235  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.176259  995603 pod_ready.go:81] duration metric: took 5.296469ms waiting for pod "coredns-5644d7b6d9-872nn" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.176271  995603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180703  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.180718  995603 pod_ready.go:81] duration metric: took 4.44114ms waiting for pod "coredns-5644d7b6d9-lqn5v" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.180725  995603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185225  995603 pod_ready.go:92] pod "etcd-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.185244  995603 pod_ready.go:81] duration metric: took 4.512736ms waiting for pod "etcd-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.185255  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190403  995603 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.190425  995603 pod_ready.go:81] duration metric: took 5.162774ms waiting for pod "kube-apiserver-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.190436  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564427  995603 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.564460  995603 pod_ready.go:81] duration metric: took 374.00421ms waiting for pod "kube-controller-manager-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.564473  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964836  995603 pod_ready.go:92] pod "kube-proxy-qg82w" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:45.964857  995603 pod_ready.go:81] duration metric: took 400.377393ms waiting for pod "kube-proxy-qg82w" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:45.964866  995603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364023  995603 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace has status "Ready":"True"
	I0830 22:20:46.364046  995603 pod_ready.go:81] duration metric: took 399.172301ms waiting for pod "kube-scheduler-old-k8s-version-250163" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:46.364060  995603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	I0830 22:20:48.672124  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:48.962198  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.461425  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:49.958485  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.959424  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:51.170855  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.172690  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:53.962708  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.461729  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:54.458026  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:56.458124  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.459811  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:55.669393  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:57.670454  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:59.670654  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:20:58.463098  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.962495  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:00.960274  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.457998  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:02.170872  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:04.670725  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:03.460674  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.461496  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:05.459727  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.959179  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:06.671066  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.169869  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:07.463765  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.961943  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:09.959351  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.458921  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:11.171435  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:13.171597  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:12.461881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.961416  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:14.459572  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:16.960064  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:15.670176  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:18.170049  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:17.460985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.462323  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.963325  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:19.459085  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:21.460169  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:20.671600  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.169931  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:24.464683  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.962740  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:23.958014  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:26.458502  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.458654  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:25.670985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:28.171321  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:29.461798  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:31.961714  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.464431  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.958557  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:30.669588  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:32.670695  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.671313  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.463531  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:36.960658  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:34.960256  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.460047  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:37.168958  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.170995  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:38.961145  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:40.961870  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:39.958213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.958373  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:41.670302  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.171198  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:43.461666  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:45.461738  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:44.459123  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.459226  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.459428  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:46.670708  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:48.671826  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:47.462306  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:49.462771  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.962010  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:50.958149  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:52.958493  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:51.169610  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:53.170386  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.461116  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:56.959735  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:54.959069  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.458784  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:55.172123  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:57.670323  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.671985  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:58.961225  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:00.961822  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:21:59.959058  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:01.959700  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.170880  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:04.171473  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:02.961938  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:05.461758  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:03.960213  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.458196  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:08.458500  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:06.671998  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:09.169979  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:07.962031  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.460716  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:10.960753  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.459638  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:11.669885  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:13.670821  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:12.461433  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:14.463156  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:16.961558  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.459765  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:17.959192  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:15.671350  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:18.170569  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.462375  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:21.961785  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:19.959308  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.457592  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:20.173424  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:22.671008  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:23.961985  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.962149  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:24.458343  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:26.958471  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:25.169264  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.181579  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.670923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:27.964954  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:30.461530  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:29.458262  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:31.463334  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.171662  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.670239  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:32.961287  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:34.961787  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:33.957827  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:35.958367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.960259  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:36.671642  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.169834  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:37.462107  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:39.961576  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.961773  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:40.458367  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:42.458710  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:41.671303  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.170994  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:43.964448  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.461777  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:44.958652  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.960005  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:46.171108  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.670866  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:48.462315  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:50.462456  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:49.459011  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.958137  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:51.170020  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.171135  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:52.462694  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:54.962055  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:53.958728  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.959555  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:55.671421  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:58.169881  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:57.461322  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:22:59.461865  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:01.963541  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.458148  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.458834  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:00.170265  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:02.170719  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.670111  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:03.967458  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:05.972793  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:04.958722  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:07.458954  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:06.670434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.671269  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:08.461195  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:10.961859  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:09.458999  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.958146  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:11.170482  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.670156  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.462648  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.463851  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:13.958659  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.962293  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.458707  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:15.670647  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:18.170462  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:17.960881  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:19.962032  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.959370  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.459653  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:20.670329  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:23.169817  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:22.461024  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:24.461537  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:26.960897  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.958696  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.459488  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:25.671024  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:28.170228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:29.461009  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:31.461891  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.958318  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.958723  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:30.170683  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:32.670966  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:33.462005  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.960841  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:34.959278  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.458068  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:35.170093  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.671411  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:37.961501  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.460893  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:39.458824  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:41.461623  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:40.170169  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.670892  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:42.461840  995192 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:43.154742  995192 pod_ready.go:81] duration metric: took 4m0.000931927s waiting for pod "metrics-server-57f55c9bc5-p8pp2" in "kube-system" namespace to be "Ready" ...
	E0830 22:23:43.154776  995192 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:23:43.154798  995192 pod_ready.go:38] duration metric: took 4m7.830262728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:23:43.154853  995192 kubeadm.go:640] restartCluster took 4m30.336637887s
	W0830 22:23:43.154961  995192 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:23:43.155001  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
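The pod_ready.go entries above are a readiness poll: minikube repeatedly fetches each system pod and checks its Ready condition until the 4m0s deadline expires, then gives up and resets the cluster with kubeadm. Below is a minimal sketch of such a poll, assuming client-go; the kubeconfig path and the 2-second interval are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the deadline
// passes, printing the same kind of status line seen in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("context deadline exceeded waiting %s for %s/%s", timeout, ns, name)
}

func main() {
	// kubeconfig path here is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-p8pp2", 4*time.Minute); err != nil {
		fmt.Println("WaitExtra:", err)
	}
}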
	I0830 22:23:43.959940  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:46.458406  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:45.170898  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:47.670457  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:48.957451  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:51.457818  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:50.171371  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:52.171468  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:54.670175  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:53.958105  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:56.458176  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:57.169990  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:59.177173  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:23:58.957583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:00.958404  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:02.958866  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:01.670484  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:03.671368  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.457466  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:07.457893  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:05.671480  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:08.170128  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:09.458376  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:11.958335  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:10.171221  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:12.171398  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.171694  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:14.432406  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.277378744s)
	I0830 22:24:14.432498  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:14.446038  995192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:24:14.455354  995192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:24:14.464292  995192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:24:14.464332  995192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0830 22:24:14.680764  995192 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:24:13.965662  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.460984  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:16.171891  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.671072  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:18.958205  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.959096  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:23.459244  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:20.671733  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:22.671947  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.677772  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:24.927380  995192 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:24:24.927462  995192 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:24:24.927559  995192 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:24:24.927697  995192 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:24:24.927843  995192 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:24:24.927938  995192 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:24:24.929775  995192 out.go:204]   - Generating certificates and keys ...
	I0830 22:24:24.929895  995192 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:24:24.930004  995192 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:24:24.930118  995192 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:24:24.930202  995192 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:24:24.930321  995192 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:24:24.930408  995192 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:24:24.930485  995192 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:24:24.930559  995192 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:24:24.930658  995192 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:24:24.930756  995192 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:24:24.930821  995192 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:24:24.930922  995192 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:24:24.931009  995192 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:24:24.931077  995192 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:24:24.931170  995192 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:24:24.931245  995192 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:24:24.931354  995192 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:24:24.931430  995192 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:24:24.934341  995192 out.go:204]   - Booting up control plane ...
	I0830 22:24:24.934422  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:24:24.934524  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:24:24.934580  995192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:24:24.934689  995192 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:24:24.934770  995192 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:24:24.934809  995192 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:24:24.934936  995192 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:24:24.935014  995192 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003378 seconds
	I0830 22:24:24.935150  995192 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:24:24.935261  995192 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:24:24.935317  995192 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:24:24.935490  995192 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-791007 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:24:24.935540  995192 kubeadm.go:322] [bootstrap-token] Using token: 3t39h1.cgypp2756rpdn3ql
	I0830 22:24:24.937035  995192 out.go:204]   - Configuring RBAC rules ...
	I0830 22:24:24.937140  995192 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:24:24.937246  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:24:24.937428  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:24:24.937619  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:24:24.937762  995192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:24:24.937883  995192 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:24:24.938044  995192 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:24:24.938105  995192 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:24:24.938178  995192 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:24:24.938197  995192 kubeadm.go:322] 
	I0830 22:24:24.938277  995192 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:24:24.938290  995192 kubeadm.go:322] 
	I0830 22:24:24.938389  995192 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:24:24.938398  995192 kubeadm.go:322] 
	I0830 22:24:24.938429  995192 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:24:24.938506  995192 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:24:24.938577  995192 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:24:24.938586  995192 kubeadm.go:322] 
	I0830 22:24:24.938658  995192 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:24:24.938681  995192 kubeadm.go:322] 
	I0830 22:24:24.938745  995192 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:24:24.938754  995192 kubeadm.go:322] 
	I0830 22:24:24.938825  995192 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:24:24.938930  995192 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:24:24.939065  995192 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:24:24.939076  995192 kubeadm.go:322] 
	I0830 22:24:24.939160  995192 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:24:24.939266  995192 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:24:24.939280  995192 kubeadm.go:322] 
	I0830 22:24:24.939367  995192 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939452  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:24:24.939473  995192 kubeadm.go:322] 	--control-plane 
	I0830 22:24:24.939479  995192 kubeadm.go:322] 
	I0830 22:24:24.939597  995192 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:24:24.939606  995192 kubeadm.go:322] 
	I0830 22:24:24.939685  995192 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 3t39h1.cgypp2756rpdn3ql \
	I0830 22:24:24.939848  995192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
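The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A hedged sketch of recomputing that value from the CA certificate under the certificateDir kubeadm reports (the ca.crt file name is assumed here):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumes the certificateDir reported above plus the conventional
	// ca.crt file name.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}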
	I0830 22:24:24.939880  995192 cni.go:84] Creating CNI manager for ""
	I0830 22:24:24.939916  995192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:24:24.942544  995192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:24:24.943961  995192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:24:24.990449  995192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
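The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is a bridge CNI network list. Its exact contents are not shown in the log; the sketch below only emits a typical bridge + portmap configuration, so the network name, plugin options, and pod subnet are assumptions.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative only: values here are assumptions, not the file minikube wrote.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}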
	I0830 22:24:25.040966  995192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:24:25.041042  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.041041  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=default-k8s-diff-port-791007 minikube.k8s.io/updated_at=2023_08_30T22_24_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.441321  995192 ops.go:34] apiserver oom_adj: -16
	I0830 22:24:25.441492  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.557357  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.163303  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:26.663721  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:25.459794  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.957287  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.171894  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:29.671326  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:27.163474  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:27.664036  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.163187  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:28.663338  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.163719  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.663846  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.163288  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:30.663346  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.163165  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:31.663996  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:29.958583  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.960227  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:31.671923  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:34.171143  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:32.163631  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:32.663347  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.163634  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:33.663228  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.163600  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:34.663994  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.163597  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:35.663419  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.163764  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:36.663168  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.163646  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:37.663613  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.163643  995192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:24:38.264223  995192 kubeadm.go:1081] duration metric: took 13.22324453s to wait for elevateKubeSystemPrivileges.
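The burst of identical "kubectl get sa default" commands above is a retry loop: after kubeadm init, minikube waits for the "default" ServiceAccount to appear before creating the minikube-rbac ClusterRoleBinding. A minimal sketch of that kind of wait, assuming kubectl is on PATH; this is not the minikube source, and the 2-minute deadline is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Equivalent in spirit to the repeated ssh_runner invocations above.
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default", "-n", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists; safe to create RBAC bindings")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}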
	I0830 22:24:38.264262  995192 kubeadm.go:406] StartCluster complete in 5m25.484553135s
	I0830 22:24:38.264286  995192 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.264411  995192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:24:38.266553  995192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:24:38.266800  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:24:38.266990  995192 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:24:38.267105  995192 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267117  995192 config.go:182] Loaded profile config "default-k8s-diff-port-791007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:24:38.267126  995192 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267141  995192 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:24:38.267163  995192 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267184  995192 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-791007"
	I0830 22:24:38.267209  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267214  995192 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.267234  995192 addons.go:240] addon metrics-server should already be in state true
	I0830 22:24:38.267207  995192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-791007"
	I0830 22:24:38.267330  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.267664  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267735  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267806  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267797  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.267851  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.267866  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.285812  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0830 22:24:38.286287  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287008  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.287036  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.287384  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0830 22:24:38.287488  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0830 22:24:38.287526  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.287808  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.287949  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.288154  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.288200  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.288370  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288516  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.288582  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288562  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.288947  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289135  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.289343  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.289569  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.289610  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.299364  995192 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-791007"
	W0830 22:24:38.299392  995192 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:24:38.299422  995192 host.go:66] Checking if "default-k8s-diff-port-791007" exists ...
	I0830 22:24:38.299824  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.299861  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.305325  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0830 22:24:38.305834  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.306214  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0830 22:24:38.306525  995192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-791007" context rescaled to 1 replicas
	I0830 22:24:38.306561  995192 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.104 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:24:38.308424  995192 out.go:177] * Verifying Kubernetes components...
	I0830 22:24:38.306646  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.306688  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.309840  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:38.309911  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310245  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310362  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.310381  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.310433  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.310801  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.310980  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.312319  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.314072  995192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:24:38.313018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.315723  995192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.315742  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:24:38.315759  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.317188  995192 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:24:34.457685  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.458268  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.459052  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:36.171434  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.173228  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:38.318441  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:24:38.318465  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:24:38.318488  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.319537  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320338  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.320365  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.320640  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.321238  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.321431  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.321733  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.322284  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322691  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.322778  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.322887  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.323058  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.323205  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.323265  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.328412  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0830 22:24:38.328853  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.329468  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.329479  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.329898  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.330379  995192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:24:38.330395  995192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:24:38.345318  995192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0830 22:24:38.345781  995192 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:24:38.346309  995192 main.go:141] libmachine: Using API Version  1
	I0830 22:24:38.346329  995192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:24:38.346665  995192 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:24:38.346886  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetState
	I0830 22:24:38.348620  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .DriverName
	I0830 22:24:38.348922  995192 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.348941  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:24:38.348961  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHHostname
	I0830 22:24:38.351758  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352206  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:1e", ip: ""} in network mk-default-k8s-diff-port-791007: {Iface:virbr3 ExpiryTime:2023-08-30 23:18:57 +0000 UTC Type:0 Mac:52:54:00:1e:2e:1e Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:default-k8s-diff-port-791007 Clientid:01:52:54:00:1e:2e:1e}
	I0830 22:24:38.352233  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | domain default-k8s-diff-port-791007 has defined IP address 192.168.61.104 and MAC address 52:54:00:1e:2e:1e in network mk-default-k8s-diff-port-791007
	I0830 22:24:38.352357  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHPort
	I0830 22:24:38.352562  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHKeyPath
	I0830 22:24:38.352787  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .GetSSHUsername
	I0830 22:24:38.352918  995192 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/default-k8s-diff-port-791007/id_rsa Username:docker}
	I0830 22:24:38.474078  995192 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.474205  995192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:24:38.479269  995192 node_ready.go:49] node "default-k8s-diff-port-791007" has status "Ready":"True"
	I0830 22:24:38.479294  995192 node_ready.go:38] duration metric: took 5.181356ms waiting for node "default-k8s-diff-port-791007" to be "Ready" ...
	I0830 22:24:38.479305  995192 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:38.486715  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:38.508419  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:24:38.508443  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:24:38.515075  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:24:38.532789  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:24:38.549460  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:24:38.549488  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:24:38.593580  995192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:38.593614  995192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:24:38.637965  995192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:24:40.093211  995192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.618968297s)
	I0830 22:24:40.093259  995192 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0830 22:24:40.526723  995192 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526748  995192 pod_ready.go:81] duration metric: took 2.040009497s waiting for pod "coredns-5dd5756b68-ck692" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:40.526757  995192 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ck692" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ck692" not found
	I0830 22:24:40.526765  995192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:40.552258  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.037149365s)
	I0830 22:24:40.552312  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019488451s)
	I0830 22:24:40.552317  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552381  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552351  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552468  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552696  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552714  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552724  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552734  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.552891  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.552902  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.552918  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.552927  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553018  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553114  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553132  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553170  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.553202  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553210  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.553219  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.553225  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.553478  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.553493  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.776628  995192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.138598233s)
	I0830 22:24:40.776714  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.776731  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777199  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777224  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777246  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777256  995192 main.go:141] libmachine: Making call to close driver server
	I0830 22:24:40.777270  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) Calling .Close
	I0830 22:24:40.777546  995192 main.go:141] libmachine: (default-k8s-diff-port-791007) DBG | Closing plugin on server side
	I0830 22:24:40.777626  995192 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:24:40.777647  995192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:24:40.777667  995192 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-791007"
	I0830 22:24:40.779719  995192 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:24:40.781134  995192 addons.go:502] enable addons completed in 2.51415908s: enabled=[storage-provisioner default-storageclass metrics-server]
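	(The addon sequence above scps each manifest onto the node and then applies them in one combined "kubectl apply -f ... -f ..." call under the cluster kubeconfig. A minimal local sketch of that apply step, assuming a plain kubectl on PATH and a KUBECONFIG environment variable instead of minikube's ssh_runner and bundled binary; paths are the ones shown in the log:)

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Placeholder paths for illustration; the log applies these files on
		// the node with /var/lib/minikube/binaries/v1.28.1/kubectl and
		// KUBECONFIG=/var/lib/minikube/kubeconfig.
		kubectl := "kubectl"
		kubeconfig := os.Getenv("KUBECONFIG")

		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}

		// Build a single "kubectl apply -f a.yaml -f b.yaml ..." invocation,
		// mirroring the combined apply seen in the log above.
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}

		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}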
	I0830 22:24:40.459185  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:42.958731  994624 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.150847  994624 pod_ready.go:81] duration metric: took 4m0.000170406s waiting for pod "metrics-server-57f55c9bc5-nfbkd" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:43.150881  994624 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:43.150893  994624 pod_ready.go:38] duration metric: took 4m3.600363648s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.150919  994624 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.150964  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:43.151043  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:43.199383  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:43.199412  994624 cri.go:89] found id: ""
	I0830 22:24:43.199420  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:43.199479  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.204289  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:43.204371  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:43.247303  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.247329  994624 cri.go:89] found id: ""
	I0830 22:24:43.247340  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:43.247396  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.252955  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:43.253024  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:43.286292  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.286318  994624 cri.go:89] found id: ""
	I0830 22:24:43.286327  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:43.286386  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.290585  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:43.290653  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:43.323616  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:43.323645  994624 cri.go:89] found id: ""
	I0830 22:24:43.323655  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:43.323729  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.328256  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:43.328326  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:43.363566  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:43.363595  994624 cri.go:89] found id: ""
	I0830 22:24:43.363605  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:43.363666  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.368006  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:43.368067  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:43.403728  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.403752  994624 cri.go:89] found id: ""
	I0830 22:24:43.403761  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:43.403833  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.407957  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:43.408020  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:43.438864  994624 cri.go:89] found id: ""
	I0830 22:24:43.438893  994624 logs.go:284] 0 containers: []
	W0830 22:24:43.438903  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:43.438911  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:43.438976  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:43.478905  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.478935  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:43.478942  994624 cri.go:89] found id: ""
	I0830 22:24:43.478951  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:43.479015  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.486919  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:43.496040  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:43.496070  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:43.669727  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:43.669764  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:43.712471  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:43.712508  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:43.746949  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:43.746988  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:42.573674  995192 pod_ready.go:92] pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.573706  995192 pod_ready.go:81] duration metric: took 2.046935361s waiting for pod "coredns-5dd5756b68-jwn87" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.573716  995192 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579433  995192 pod_ready.go:92] pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.579450  995192 pod_ready.go:81] duration metric: took 5.72841ms waiting for pod "etcd-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.579458  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584499  995192 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.584519  995192 pod_ready.go:81] duration metric: took 5.055504ms waiting for pod "kube-apiserver-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.584527  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678045  995192 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:42.678071  995192 pod_ready.go:81] duration metric: took 93.537153ms waiting for pod "kube-controller-manager-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:42.678084  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082548  995192 pod_ready.go:92] pod "kube-proxy-bbdvk" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.082576  995192 pod_ready.go:81] duration metric: took 404.485397ms waiting for pod "kube-proxy-bbdvk" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.082585  995192 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479813  995192 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace has status "Ready":"True"
	I0830 22:24:43.479840  995192 pod_ready.go:81] duration metric: took 397.248046ms waiting for pod "kube-scheduler-default-k8s-diff-port-791007" in "kube-system" namespace to be "Ready" ...
	I0830 22:24:43.479851  995192 pod_ready.go:38] duration metric: took 5.000533366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:43.479872  995192 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:24:43.479956  995192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:43.498558  995192 api_server.go:72] duration metric: took 5.191959207s to wait for apiserver process to appear ...
	I0830 22:24:43.498583  995192 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:43.498603  995192 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8444/healthz ...
	I0830 22:24:43.504260  995192 api_server.go:279] https://192.168.61.104:8444/healthz returned 200:
	ok
	I0830 22:24:43.505566  995192 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:43.505589  995192 api_server.go:131] duration metric: took 6.997863ms to wait for apiserver health ...
	I0830 22:24:43.505598  995192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:43.682798  995192 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:43.682837  995192 system_pods.go:61] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:43.682846  995192 system_pods.go:61] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:43.682856  995192 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:43.682863  995192 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:43.682870  995192 system_pods.go:61] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:43.682876  995192 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:43.682887  995192 system_pods.go:61] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:43.682897  995192 system_pods.go:61] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:43.682909  995192 system_pods.go:74] duration metric: took 177.304345ms to wait for pod list to return data ...
	I0830 22:24:43.682919  995192 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:43.878616  995192 default_sa.go:45] found service account: "default"
	I0830 22:24:43.878643  995192 default_sa.go:55] duration metric: took 195.70884ms for default service account to be created ...
	I0830 22:24:43.878654  995192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:44.083123  995192 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:44.083155  995192 system_pods.go:89] "coredns-5dd5756b68-jwn87" [984f4b65-9261-4952-a368-5fac2fa14bd7] Running
	I0830 22:24:44.083161  995192 system_pods.go:89] "etcd-default-k8s-diff-port-791007" [156cdcfd-fa81-4542-8506-18b3ab61f725] Running
	I0830 22:24:44.083165  995192 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-791007" [841dcf3a-9ab5-4fbf-a20a-4179d4a793fd] Running
	I0830 22:24:44.083170  995192 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-791007" [4cef1264-90fb-47fc-a155-4cb267c961aa] Running
	I0830 22:24:44.083177  995192 system_pods.go:89] "kube-proxy-bbdvk" [dd98a34a-f2f9-4e73-a751-e68a1addb89f] Running
	I0830 22:24:44.083181  995192 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-791007" [11bf5dce-8d54-4029-a9d2-423e278b6181] Running
	I0830 22:24:44.083187  995192 system_pods.go:89] "metrics-server-57f55c9bc5-dllmg" [6826d918-a2ac-4744-8145-f6d7599499af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:44.083194  995192 system_pods.go:89] "storage-provisioner" [fb41168e-19d2-4b57-a2fb-ab0b3d0ff836] Running
	I0830 22:24:44.083203  995192 system_pods.go:126] duration metric: took 204.542978ms to wait for k8s-apps to be running ...
	I0830 22:24:44.083216  995192 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:44.083297  995192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:44.098110  995192 system_svc.go:56] duration metric: took 14.88196ms WaitForService to wait for kubelet.
	I0830 22:24:44.098143  995192 kubeadm.go:581] duration metric: took 5.7915497s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:44.098211  995192 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:44.278751  995192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:44.278802  995192 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:44.278814  995192 node_conditions.go:105] duration metric: took 180.597923ms to run NodePressure ...
	I0830 22:24:44.278825  995192 start.go:228] waiting for startup goroutines ...
	I0830 22:24:44.278831  995192 start.go:233] waiting for cluster config update ...
	I0830 22:24:44.278841  995192 start.go:242] writing updated cluster config ...
	I0830 22:24:44.279208  995192 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:44.332074  995192 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:44.334502  995192 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-791007" cluster and "default" namespace by default
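	(Before the "Done!" line, the log shows a healthz probe against https://192.168.61.104:8444/healthz that must return 200 with body "ok". A minimal sketch of such a probe, assuming TLS verification is skipped purely for brevity; a real client, like minikube itself, should trust the cluster CA instead:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log; adjust host and port for your cluster.
		url := "https://192.168.61.104:8444/healthz"

		// InsecureSkipVerify is for illustration only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		resp, err := client.Get(url)
		if err != nil {
			log.Fatalf("healthz request failed: %v", err)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with "ok", matching the
		// "returned 200: ok" lines in the log above.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}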
	I0830 22:24:40.672327  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.171136  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:43.780116  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:43.780147  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:43.824462  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:43.824494  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:43.875847  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:43.875881  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:43.937533  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:43.937582  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:43.950917  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:43.950948  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:43.989236  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:43.989265  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:44.025171  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:44.025218  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:44.644566  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:44.644609  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:44.692321  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:44.692356  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.229304  994624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:24:47.252442  994624 api_server.go:72] duration metric: took 4m15.086891336s to wait for apiserver process to appear ...
	I0830 22:24:47.252476  994624 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:24:47.252521  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:47.252593  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:47.286367  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.286397  994624 cri.go:89] found id: ""
	I0830 22:24:47.286410  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:47.286461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.290812  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:47.290883  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:47.324349  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.324376  994624 cri.go:89] found id: ""
	I0830 22:24:47.324386  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:47.324440  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.329002  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:47.329072  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:47.362954  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:47.362985  994624 cri.go:89] found id: ""
	I0830 22:24:47.362996  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:47.363062  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.367498  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:47.367587  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:47.398450  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:47.398478  994624 cri.go:89] found id: ""
	I0830 22:24:47.398489  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:47.398550  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.402646  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:47.402741  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:47.438663  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:47.438691  994624 cri.go:89] found id: ""
	I0830 22:24:47.438701  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:47.438769  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.443046  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:47.443114  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:47.472698  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.472725  994624 cri.go:89] found id: ""
	I0830 22:24:47.472733  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:47.472792  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.477075  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:47.477150  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:47.507099  994624 cri.go:89] found id: ""
	I0830 22:24:47.507138  994624 logs.go:284] 0 containers: []
	W0830 22:24:47.507148  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:47.507157  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:47.507232  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:47.540635  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:47.540661  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.540667  994624 cri.go:89] found id: ""
	I0830 22:24:47.540676  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:47.540734  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.545274  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:47.549659  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:47.549681  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:47.605419  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:47.605460  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:47.646819  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:47.646856  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:47.684772  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:47.684801  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:47.731741  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:47.731791  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:47.762713  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:47.762745  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:48.266510  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:48.266557  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:48.315124  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:48.315164  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:48.332407  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:48.332447  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:48.463670  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:48.463710  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:48.498034  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:48.498067  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:48.528326  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:48.528372  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:48.563858  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:48.563893  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:45.670559  995603 pod_ready.go:102] pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace has status "Ready":"False"
	I0830 22:24:46.364206  995603 pod_ready.go:81] duration metric: took 4m0.000126235s waiting for pod "metrics-server-74d5856cc6-7vrzq" in "kube-system" namespace to be "Ready" ...
	E0830 22:24:46.364246  995603 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0830 22:24:46.364267  995603 pod_ready.go:38] duration metric: took 4m1.19899008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:24:46.364298  995603 kubeadm.go:640] restartCluster took 5m11.375966766s
	W0830 22:24:46.364364  995603 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0830 22:24:46.364394  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0830 22:24:51.095064  994624 api_server.go:253] Checking apiserver healthz at https://192.168.72.28:8443/healthz ...
	I0830 22:24:51.106674  994624 api_server.go:279] https://192.168.72.28:8443/healthz returned 200:
	ok
	I0830 22:24:51.108320  994624 api_server.go:141] control plane version: v1.28.1
	I0830 22:24:51.108339  994624 api_server.go:131] duration metric: took 3.855856321s to wait for apiserver health ...
	I0830 22:24:51.108347  994624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:24:51.108375  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0830 22:24:51.108422  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0830 22:24:51.140030  994624 cri.go:89] found id: "2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:51.140059  994624 cri.go:89] found id: ""
	I0830 22:24:51.140069  994624 logs.go:284] 1 containers: [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373]
	I0830 22:24:51.140133  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.144302  994624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0830 22:24:51.144375  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0830 22:24:51.181915  994624 cri.go:89] found id: "c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:51.181944  994624 cri.go:89] found id: ""
	I0830 22:24:51.181953  994624 logs.go:284] 1 containers: [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2]
	I0830 22:24:51.182007  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.187104  994624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0830 22:24:51.187171  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0830 22:24:51.220763  994624 cri.go:89] found id: "61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:51.220794  994624 cri.go:89] found id: ""
	I0830 22:24:51.220806  994624 logs.go:284] 1 containers: [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615]
	I0830 22:24:51.220890  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.225368  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0830 22:24:51.225443  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0830 22:24:51.263131  994624 cri.go:89] found id: "94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:51.263155  994624 cri.go:89] found id: ""
	I0830 22:24:51.263164  994624 logs.go:284] 1 containers: [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6]
	I0830 22:24:51.263231  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.268531  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0830 22:24:51.268586  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0830 22:24:51.307119  994624 cri.go:89] found id: "2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.307145  994624 cri.go:89] found id: ""
	I0830 22:24:51.307154  994624 logs.go:284] 1 containers: [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3]
	I0830 22:24:51.307224  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.311914  994624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0830 22:24:51.311988  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0830 22:24:51.341363  994624 cri.go:89] found id: "5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:51.341391  994624 cri.go:89] found id: ""
	I0830 22:24:51.341402  994624 logs.go:284] 1 containers: [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512]
	I0830 22:24:51.341461  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.345501  994624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0830 22:24:51.345570  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0830 22:24:51.378276  994624 cri.go:89] found id: ""
	I0830 22:24:51.378311  994624 logs.go:284] 0 containers: []
	W0830 22:24:51.378322  994624 logs.go:286] No container was found matching "kindnet"
	I0830 22:24:51.378329  994624 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0830 22:24:51.378398  994624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0830 22:24:51.416207  994624 cri.go:89] found id: "a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.416228  994624 cri.go:89] found id: "c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:51.416232  994624 cri.go:89] found id: ""
	I0830 22:24:51.416245  994624 logs.go:284] 2 containers: [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6]
	I0830 22:24:51.416295  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.421034  994624 ssh_runner.go:195] Run: which crictl
	I0830 22:24:51.424911  994624 logs.go:123] Gathering logs for kube-proxy [2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3] ...
	I0830 22:24:51.424938  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fe23692aaba24d70947425d134e6ca30ed8b62fa458b733ec1ecaa1515b01a3"
	I0830 22:24:51.458543  994624 logs.go:123] Gathering logs for storage-provisioner [a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b] ...
	I0830 22:24:51.458576  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4ec3add6f727381f17cdd6658492cd814a58dc29f91272c01bce2496e4b9d1b"
	I0830 22:24:51.489189  994624 logs.go:123] Gathering logs for CRI-O ...
	I0830 22:24:51.489223  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0830 22:24:52.074879  994624 logs.go:123] Gathering logs for dmesg ...
	I0830 22:24:52.074924  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0830 22:24:52.091316  994624 logs.go:123] Gathering logs for etcd [c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2] ...
	I0830 22:24:52.091357  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6594d2e258e63ab6424294552c3893ce88eadaffaeca18a688e5584c7b67ca2"
	I0830 22:24:52.131564  994624 logs.go:123] Gathering logs for coredns [61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615] ...
	I0830 22:24:52.131602  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61c09841e92e9b9b1902d28ba68fec8882b656f037d8411bb75d948754295615"
	I0830 22:24:52.168850  994624 logs.go:123] Gathering logs for kube-scheduler [94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6] ...
	I0830 22:24:52.168879  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94b2663b3d51d84ca318bef7acc101503511acab635e72712501b60ea50416f6"
	I0830 22:24:52.200329  994624 logs.go:123] Gathering logs for storage-provisioner [c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6] ...
	I0830 22:24:52.200358  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c00d7aca5019da8307dbf90c9231d8b08416d6d6889b89b5f48c777c324f86a6"
	I0830 22:24:52.230767  994624 logs.go:123] Gathering logs for container status ...
	I0830 22:24:52.230799  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0830 22:24:52.276139  994624 logs.go:123] Gathering logs for kubelet ...
	I0830 22:24:52.276177  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0830 22:24:52.330487  994624 logs.go:123] Gathering logs for describe nodes ...
	I0830 22:24:52.330523  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0830 22:24:52.469305  994624 logs.go:123] Gathering logs for kube-apiserver [2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373] ...
	I0830 22:24:52.469336  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2aff15ad720bff907911ed4bdc49b1846566093a46fc08ccf7e999c8e129a373"
	I0830 22:24:52.536395  994624 logs.go:123] Gathering logs for kube-controller-manager [5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512] ...
	I0830 22:24:52.536432  994624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f90117987e5bc977c5066bf69003409da4a7ddb880f8ba7d43a5fda35d71512"
	I0830 22:24:55.089149  994624 system_pods.go:59] 8 kube-system pods found
	I0830 22:24:55.089184  994624 system_pods.go:61] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.089194  994624 system_pods.go:61] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.089198  994624 system_pods.go:61] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.089203  994624 system_pods.go:61] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.089207  994624 system_pods.go:61] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.089211  994624 system_pods.go:61] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.089217  994624 system_pods.go:61] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.089224  994624 system_pods.go:61] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.089230  994624 system_pods.go:74] duration metric: took 3.980877363s to wait for pod list to return data ...
	I0830 22:24:55.089237  994624 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:24:55.091833  994624 default_sa.go:45] found service account: "default"
	I0830 22:24:55.091862  994624 default_sa.go:55] duration metric: took 2.618667ms for default service account to be created ...
	I0830 22:24:55.091871  994624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:24:55.098108  994624 system_pods.go:86] 8 kube-system pods found
	I0830 22:24:55.098145  994624 system_pods.go:89] "coredns-5dd5756b68-hlwf8" [cdc95a13-1a94-4113-9ec0-569de1c5f49b] Running
	I0830 22:24:55.098154  994624 system_pods.go:89] "etcd-no-preload-698195" [de6cf31e-622b-4bb0-882a-8fc60bdb383e] Running
	I0830 22:24:55.098163  994624 system_pods.go:89] "kube-apiserver-no-preload-698195" [94f50744-1e53-411c-bbe2-749b4de27633] Running
	I0830 22:24:55.098179  994624 system_pods.go:89] "kube-controller-manager-no-preload-698195" [989832fb-00e9-4516-ae2a-8e70e4a97ae0] Running
	I0830 22:24:55.098190  994624 system_pods.go:89] "kube-proxy-5fjvd" [e0c2f2a2-2a89-4f00-8e87-76103160ab55] Running
	I0830 22:24:55.098201  994624 system_pods.go:89] "kube-scheduler-no-preload-698195" [c323330f-da7c-40fa-8e43-f9e79f370143] Running
	I0830 22:24:55.098212  994624 system_pods.go:89] "metrics-server-57f55c9bc5-nfbkd" [450f12e3-6554-41c5-9d41-bee735b322b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:24:55.098233  994624 system_pods.go:89] "storage-provisioner" [c4465b2a-7390-417f-b9ba-f39062e6d685] Running
	I0830 22:24:55.098241  994624 system_pods.go:126] duration metric: took 6.364144ms to wait for k8s-apps to be running ...
	I0830 22:24:55.098250  994624 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:24:55.098297  994624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:24:55.114382  994624 system_svc.go:56] duration metric: took 16.118629ms WaitForService to wait for kubelet.
	I0830 22:24:55.114413  994624 kubeadm.go:581] duration metric: took 4m22.94887118s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:24:55.114435  994624 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:24:55.118227  994624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:24:55.118256  994624 node_conditions.go:123] node cpu capacity is 2
	I0830 22:24:55.118272  994624 node_conditions.go:105] duration metric: took 3.832437ms to run NodePressure ...
	I0830 22:24:55.118287  994624 start.go:228] waiting for startup goroutines ...
	I0830 22:24:55.118295  994624 start.go:233] waiting for cluster config update ...
	I0830 22:24:55.118309  994624 start.go:242] writing updated cluster config ...
	I0830 22:24:55.118611  994624 ssh_runner.go:195] Run: rm -f paused
	I0830 22:24:55.169756  994624 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:24:55.172028  994624 out.go:177] * Done! kubectl is now configured to use "no-preload-698195" cluster and "default" namespace by default
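	(The repeated log-gathering loops above follow one pattern per component: list matching container IDs with "crictl ps -a --quiet --name=<component>", then tail that container's last 400 log lines. A minimal sketch of the same two-step pattern, assuming crictl is on PATH and the process has sufficient privileges:)

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers whose name matches the
	// given component, mirroring "crictl ps -a --quiet --name=<component>".
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(component)
			if err != nil {
				log.Fatalf("listing %s containers: %v", component, err)
			}
			for _, id := range ids {
				// Tail the last 400 lines, as the test harness does above.
				logs, err := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					log.Printf("logs for %s (%s): %v", component, id, err)
					continue
				}
				fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
			}
		}
	}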
	I0830 22:25:09.359961  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (22.995525599s)
	I0830 22:25:09.360040  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:09.375757  995603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:25:09.385118  995603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:25:09.394601  995603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:25:09.394640  995603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0830 22:25:09.454824  995603 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0830 22:25:09.455022  995603 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:25:09.599893  995603 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:25:09.600055  995603 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:25:09.600213  995603 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:25:09.783920  995603 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:25:09.784082  995603 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:25:09.793193  995603 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0830 22:25:09.902777  995603 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:25:09.904820  995603 out.go:204]   - Generating certificates and keys ...
	I0830 22:25:09.904937  995603 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:25:09.905035  995603 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:25:09.905150  995603 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0830 22:25:09.905241  995603 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0830 22:25:09.905350  995603 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0830 22:25:09.905423  995603 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0830 22:25:09.905540  995603 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0830 22:25:09.905622  995603 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0830 22:25:09.905799  995603 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0830 22:25:09.905918  995603 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0830 22:25:09.905978  995603 kubeadm.go:322] [certs] Using the existing "sa" key
	I0830 22:25:09.906052  995603 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:25:10.141265  995603 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:25:10.238428  995603 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:25:10.387118  995603 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:25:10.620307  995603 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:25:10.625802  995603 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:25:10.627926  995603 out.go:204]   - Booting up control plane ...
	I0830 22:25:10.629866  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:25:10.635839  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:25:10.638800  995603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:25:10.641079  995603 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:25:10.666312  995603 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:25:20.671894  995603 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004868 seconds
	I0830 22:25:20.672078  995603 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:25:20.687003  995603 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:25:21.215417  995603 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:25:21.215657  995603 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-250163 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0830 22:25:21.726398  995603 kubeadm.go:322] [bootstrap-token] Using token: y3ik1i.subqwfsto1ck6o9y
	I0830 22:25:21.728095  995603 out.go:204]   - Configuring RBAC rules ...
	I0830 22:25:21.728243  995603 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:25:21.735828  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:25:21.741247  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:25:21.744588  995603 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:25:21.747966  995603 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:25:21.835002  995603 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:25:22.157106  995603 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:25:22.157129  995603 kubeadm.go:322] 
	I0830 22:25:22.157207  995603 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:25:22.157221  995603 kubeadm.go:322] 
	I0830 22:25:22.157343  995603 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:25:22.157373  995603 kubeadm.go:322] 
	I0830 22:25:22.157410  995603 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:25:22.157493  995603 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:25:22.157572  995603 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:25:22.157581  995603 kubeadm.go:322] 
	I0830 22:25:22.157661  995603 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:25:22.157779  995603 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:25:22.157877  995603 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:25:22.157894  995603 kubeadm.go:322] 
	I0830 22:25:22.158002  995603 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0830 22:25:22.158104  995603 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:25:22.158119  995603 kubeadm.go:322] 
	I0830 22:25:22.158250  995603 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158415  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a \
	I0830 22:25:22.158457  995603 kubeadm.go:322]     --control-plane 	  
	I0830 22:25:22.158467  995603 kubeadm.go:322] 
	I0830 22:25:22.158555  995603 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:25:22.158566  995603 kubeadm.go:322] 
	I0830 22:25:22.158674  995603 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y3ik1i.subqwfsto1ck6o9y \
	I0830 22:25:22.158820  995603 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:68a7b11342517465b3ed7746a445b594eecb3b7fe864fee8d446ed124b16109a 
	I0830 22:25:22.159148  995603 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:25:22.159192  995603 cni.go:84] Creating CNI manager for ""
	I0830 22:25:22.159205  995603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 22:25:22.160970  995603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:25:22.162353  995603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:25:22.173835  995603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:25:22.192193  995603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:25:22.192332  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=old-k8s-version-250163 minikube.k8s.io/updated_at=2023_08_30T22_25_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.192335  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.440832  995603 ops.go:34] apiserver oom_adj: -16
	I0830 22:25:22.441067  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:22.560349  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.171762  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:23.671955  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.171789  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:24.671863  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.172176  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:25.672262  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.172348  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:26.672680  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.171856  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:27.671722  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.171712  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:28.671959  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.171914  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:29.672320  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.171688  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:30.671958  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.172481  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:31.672528  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.172583  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:32.672562  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.171839  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:33.672125  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.172515  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:34.672643  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.172469  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:35.672444  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.171897  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:36.672260  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.171900  995603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:25:37.332591  995603 kubeadm.go:1081] duration metric: took 15.140354535s to wait for elevateKubeSystemPrivileges.
	I0830 22:25:37.332635  995603 kubeadm.go:406] StartCluster complete in 6m2.391789918s
	I0830 22:25:37.332659  995603 settings.go:142] acquiring lock: {Name:mk86a33be631b0c488f84f735edc2475d02a32da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.332770  995603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:25:37.334722  995603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/kubeconfig: {Name:mk0526218ca029d07e1b4b6aabf66ceb463e1e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:25:37.334991  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:25:37.335087  995603 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:25:37.335217  995603 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335241  995603 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-250163"
	W0830 22:25:37.335253  995603 addons.go:240] addon storage-provisioner should already be in state true
	I0830 22:25:37.335313  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335317  995603 config.go:182] Loaded profile config "old-k8s-version-250163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0830 22:25:37.335322  995603 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335342  995603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-250163"
	I0830 22:25:37.335345  995603 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-250163"
	I0830 22:25:37.335380  995603 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-250163"
	W0830 22:25:37.335391  995603 addons.go:240] addon metrics-server should already be in state true
	I0830 22:25:37.335440  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.335753  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335807  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335847  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.335810  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.335939  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.355619  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I0830 22:25:37.355760  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43941
	I0830 22:25:37.355979  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0830 22:25:37.356166  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356203  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356595  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.356729  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356748  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.356730  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.356793  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357097  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.357114  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.357170  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357177  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357383  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.357486  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.357825  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.357857  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.358246  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.358292  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.373639  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0830 22:25:37.374107  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.374639  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.374657  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.375035  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.375359  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.377439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.379303  995603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:25:37.378176  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0830 22:25:37.380617  995603 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-250163"
	W0830 22:25:37.380661  995603 addons.go:240] addon default-storageclass should already be in state true
	I0830 22:25:37.380706  995603 host.go:66] Checking if "old-k8s-version-250163" exists ...
	I0830 22:25:37.380787  995603 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.380802  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:25:37.380826  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.381081  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.381123  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.381726  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.382284  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.382304  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.382656  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.382878  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.384791  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.387018  995603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0830 22:25:37.385098  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.385806  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.388841  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.388863  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.388865  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:25:37.388883  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:25:37.388907  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.389015  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.389121  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.389274  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.392059  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392538  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.392557  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.392720  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.392861  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.392989  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.393101  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.399504  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I0830 22:25:37.399592  995603 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-250163" context rescaled to 1 replicas
	I0830 22:25:37.399627  995603 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:25:37.401322  995603 out.go:177] * Verifying Kubernetes components...
	I0830 22:25:37.400205  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.402915  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:25:37.403460  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.403485  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.403872  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.404488  995603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 22:25:37.404537  995603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 22:25:37.420598  995603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0830 22:25:37.421352  995603 main.go:141] libmachine: () Calling .GetVersion
	I0830 22:25:37.422218  995603 main.go:141] libmachine: Using API Version  1
	I0830 22:25:37.422240  995603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 22:25:37.422714  995603 main.go:141] libmachine: () Calling .GetMachineName
	I0830 22:25:37.422979  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetState
	I0830 22:25:37.424750  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .DriverName
	I0830 22:25:37.425396  995603 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.425415  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:25:37.425439  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHHostname
	I0830 22:25:37.428142  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428731  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:25:c9", ip: ""} in network mk-old-k8s-version-250163: {Iface:virbr1 ExpiryTime:2023-08-30 23:19:18 +0000 UTC Type:0 Mac:52:54:00:ba:25:c9 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:old-k8s-version-250163 Clientid:01:52:54:00:ba:25:c9}
	I0830 22:25:37.428762  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | domain old-k8s-version-250163 has defined IP address 192.168.39.10 and MAC address 52:54:00:ba:25:c9 in network mk-old-k8s-version-250163
	I0830 22:25:37.428900  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHPort
	I0830 22:25:37.429077  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHKeyPath
	I0830 22:25:37.429330  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .GetSSHUsername
	I0830 22:25:37.429469  995603 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/old-k8s-version-250163/id_rsa Username:docker}
	I0830 22:25:37.705452  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:25:37.713345  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:25:37.736333  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:25:37.736356  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0830 22:25:37.825018  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:25:37.825051  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:25:37.858566  995603 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.858657  995603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:25:37.888050  995603 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:37.888082  995603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:25:37.901662  995603 node_ready.go:49] node "old-k8s-version-250163" has status "Ready":"True"
	I0830 22:25:37.901689  995603 node_ready.go:38] duration metric: took 43.090996ms waiting for node "old-k8s-version-250163" to be "Ready" ...
	I0830 22:25:37.901701  995603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:37.928785  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:37.960479  995603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:25:39.232573  995603 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232603  995603 pod_ready.go:81] duration metric: took 1.303781463s waiting for pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace to be "Ready" ...
	E0830 22:25:39.232616  995603 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-mx7ff" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mx7ff" not found
	I0830 22:25:39.232630  995603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:39.305932  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.600438988s)
	I0830 22:25:39.306003  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306018  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306031  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.592647384s)
	I0830 22:25:39.306084  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306106  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306088  995603 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.447402831s)
	I0830 22:25:39.306222  995603 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0830 22:25:39.306459  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306481  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306485  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306512  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306518  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306534  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306517  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306608  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306628  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.306638  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.306862  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306903  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306911  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306946  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.306972  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.306981  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.306993  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.307001  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.307338  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.307387  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.307407  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.425740  995603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.465201154s)
	I0830 22:25:39.425823  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.425844  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426221  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426260  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426272  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426289  995603 main.go:141] libmachine: Making call to close driver server
	I0830 22:25:39.426311  995603 main.go:141] libmachine: (old-k8s-version-250163) Calling .Close
	I0830 22:25:39.426584  995603 main.go:141] libmachine: (old-k8s-version-250163) DBG | Closing plugin on server side
	I0830 22:25:39.426620  995603 main.go:141] libmachine: Successfully made call to close driver server
	I0830 22:25:39.426638  995603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0830 22:25:39.426657  995603 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-250163"
	I0830 22:25:39.428628  995603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0830 22:25:39.430476  995603 addons.go:502] enable addons completed in 2.095405793s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0830 22:25:40.785067  995603 pod_ready.go:92] pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.785090  995603 pod_ready.go:81] duration metric: took 1.552452887s waiting for pod "coredns-5644d7b6d9-ntb45" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.785100  995603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790132  995603 pod_ready.go:92] pod "kube-proxy-866k8" in "kube-system" namespace has status "Ready":"True"
	I0830 22:25:40.790158  995603 pod_ready.go:81] duration metric: took 5.051684ms waiting for pod "kube-proxy-866k8" in "kube-system" namespace to be "Ready" ...
	I0830 22:25:40.790173  995603 pod_ready.go:38] duration metric: took 2.888452893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:25:40.790199  995603 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:25:40.790247  995603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:25:40.805458  995603 api_server.go:72] duration metric: took 3.405792506s to wait for apiserver process to appear ...
	I0830 22:25:40.805488  995603 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:25:40.805512  995603 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0830 22:25:40.812389  995603 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0830 22:25:40.813455  995603 api_server.go:141] control plane version: v1.16.0
	I0830 22:25:40.813483  995603 api_server.go:131] duration metric: took 7.983448ms to wait for apiserver health ...
	I0830 22:25:40.813520  995603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:25:40.818720  995603 system_pods.go:59] 4 kube-system pods found
	I0830 22:25:40.818741  995603 system_pods.go:61] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.818746  995603 system_pods.go:61] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.818754  995603 system_pods.go:61] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.818763  995603 system_pods.go:61] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.818768  995603 system_pods.go:74] duration metric: took 5.239623ms to wait for pod list to return data ...
	I0830 22:25:40.818776  995603 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:25:40.821982  995603 default_sa.go:45] found service account: "default"
	I0830 22:25:40.822001  995603 default_sa.go:55] duration metric: took 3.215755ms for default service account to be created ...
	I0830 22:25:40.822010  995603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:25:40.824823  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:40.824844  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:40.824850  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:40.824860  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:40.824871  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:40.824896  995603 retry.go:31] will retry after 244.703972ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.075793  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.075829  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.075838  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.075849  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.075860  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.075886  995603 retry.go:31] will retry after 325.650304ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.407202  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.407234  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.407242  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.407252  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.407262  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.407313  995603 retry.go:31] will retry after 449.708915ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:41.862007  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:41.862038  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:41.862043  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:41.862061  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:41.862070  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0830 22:25:41.862086  995603 retry.go:31] will retry after 484.451835ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:42.351597  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:42.351637  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:42.351646  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:42.351656  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:42.351664  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:42.351680  995603 retry.go:31] will retry after 739.711019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.096340  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.096365  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.096371  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.096380  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.096387  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.096402  995603 retry.go:31] will retry after 871.763135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:43.974914  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:43.974947  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:43.974954  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:43.974964  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:43.974973  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:43.974994  995603 retry.go:31] will retry after 1.11275286s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:45.093268  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:45.093293  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:45.093299  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:45.093306  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:45.093313  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:45.093329  995603 retry.go:31] will retry after 1.015840649s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:46.114920  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:46.114954  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:46.114961  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:46.114972  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:46.114982  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:46.115002  995603 retry.go:31] will retry after 1.822388925s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:47.942838  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:47.942870  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:47.942877  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:47.942887  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:47.942900  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:47.942920  995603 retry.go:31] will retry after 1.516432463s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:49.464430  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:49.464460  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:49.464465  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:49.464473  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:49.464480  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:49.464496  995603 retry.go:31] will retry after 2.558675876s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:52.028440  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:52.028469  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:52.028474  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:52.028481  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:52.028488  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:52.028503  995603 retry.go:31] will retry after 2.801664105s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:54.835174  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:54.835200  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:54.835205  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:54.835212  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:54.835219  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:54.835243  995603 retry.go:31] will retry after 3.386411543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:25:58.228062  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:25:58.228104  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:25:58.228113  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:25:58.228123  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:25:58.228136  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:25:58.228158  995603 retry.go:31] will retry after 5.58749509s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:03.822486  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:03.822511  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:03.822516  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:03.822523  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:03.822530  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:03.822548  995603 retry.go:31] will retry after 6.26222599s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:10.092537  995603 system_pods.go:86] 4 kube-system pods found
	I0830 22:26:10.092563  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:10.092569  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:10.092576  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:10.092582  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:10.092599  995603 retry.go:31] will retry after 6.680813015s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:16.780093  995603 system_pods.go:86] 5 kube-system pods found
	I0830 22:26:16.780120  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:16.780125  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Pending
	I0830 22:26:16.780130  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:16.780138  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:16.780145  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:16.780161  995603 retry.go:31] will retry after 9.963152707s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0830 22:26:26.749177  995603 system_pods.go:86] 7 kube-system pods found
	I0830 22:26:26.749205  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:26.749211  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:26.749215  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:26.749219  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:26.749223  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Pending
	I0830 22:26:26.749230  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:26.749237  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:26.749252  995603 retry.go:31] will retry after 8.744971537s: missing components: etcd, kube-scheduler
	I0830 22:26:35.500731  995603 system_pods.go:86] 8 kube-system pods found
	I0830 22:26:35.500759  995603 system_pods.go:89] "coredns-5644d7b6d9-ntb45" [cc1efd40-7731-4f7c-9155-46f8af8b9883] Running
	I0830 22:26:35.500765  995603 system_pods.go:89] "etcd-old-k8s-version-250163" [260642d3-280e-4ae1-97bc-d15a904b3205] Running
	I0830 22:26:35.500769  995603 system_pods.go:89] "kube-apiserver-old-k8s-version-250163" [f06ae5fe-240d-4523-86f0-b3044ea45c4c] Running
	I0830 22:26:35.500775  995603 system_pods.go:89] "kube-controller-manager-old-k8s-version-250163" [dfb636c2-5a87-4d9a-97c0-2fd763d52b69] Running
	I0830 22:26:35.500779  995603 system_pods.go:89] "kube-proxy-866k8" [e0be4379-0283-4c7b-854d-755e28e9807d] Running
	I0830 22:26:35.500783  995603 system_pods.go:89] "kube-scheduler-old-k8s-version-250163" [9d0c93a7-5cad-4a40-9d3d-3b828e33dca9] Running
	I0830 22:26:35.500789  995603 system_pods.go:89] "metrics-server-74d5856cc6-h6bcw" [10b707f1-f4e0-43dc-bb3e-e16c405b4a27] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0830 22:26:35.500796  995603 system_pods.go:89] "storage-provisioner" [e3da9204-c5aa-44ce-9584-026334decc99] Running
	I0830 22:26:35.500813  995603 system_pods.go:126] duration metric: took 54.67879848s to wait for k8s-apps to be running ...
	I0830 22:26:35.500827  995603 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:26:35.500876  995603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:26:35.519861  995603 system_svc.go:56] duration metric: took 19.021631ms WaitForService to wait for kubelet.
	I0830 22:26:35.519900  995603 kubeadm.go:581] duration metric: took 58.120243521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:26:35.519985  995603 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:26:35.524455  995603 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0830 22:26:35.524486  995603 node_conditions.go:123] node cpu capacity is 2
	I0830 22:26:35.524537  995603 node_conditions.go:105] duration metric: took 4.543152ms to run NodePressure ...
	I0830 22:26:35.524550  995603 start.go:228] waiting for startup goroutines ...
	I0830 22:26:35.524562  995603 start.go:233] waiting for cluster config update ...
	I0830 22:26:35.524573  995603 start.go:242] writing updated cluster config ...
	I0830 22:26:35.524938  995603 ssh_runner.go:195] Run: rm -f paused
	I0830 22:26:35.578723  995603 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0830 22:26:35.580954  995603 out.go:177] 
	W0830 22:26:35.582332  995603 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0830 22:26:35.583700  995603 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0830 22:26:35.585290  995603 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-250163" cluster and "default" namespace by default
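The system_pods.go and retry.go entries above show the shape of the wait loop in this run: list the kube-system pods, note which control-plane components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) are still missing, sleep a growing interval, and poll again until everything reports Running. The stand-alone Go sketch below reproduces that pattern with client-go purely as an illustration; it is not minikube's code, and the kubeconfig path, the kubeadm "component" pod label, the doubling backoff and the 6-minute budget are all assumptions made for the example.

// pollpods.go: a rough, stand-alone illustration of the poll-and-backoff wait
// seen in the system_pods.go / retry.go entries above. NOT minikube's
// implementation; the kubeconfig path, the kubeadm "component" label, the
// backoff schedule and the 6-minute budget are assumptions for the sketch.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// missingComponents lists kube-system pods and reports which required
// control-plane components do not yet have a Running pod.
func missingComponents(client kubernetes.Interface, required []string) []string {
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return required // on an API error, treat every component as unconfirmed
	}
	running := map[string]bool{}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running[p.Labels["component"]] = true // kubeadm static pods carry component=<name>
		}
	}
	var missing []string
	for _, name := range required {
		if !running[name] {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	required := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
	deadline := time.Now().Add(6 * time.Minute)

	for backoff := 2 * time.Second; time.Now().Before(deadline); backoff *= 2 {
		missing := missingComponents(client, required)
		if len(missing) == 0 {
			fmt.Println("all control-plane components are Running")
			return
		}
		fmt.Printf("will retry after %s: missing components: %v\n", backoff, missing)
		time.Sleep(backoff)
	}
	fmt.Println("timed out waiting for kube-system pods")
}

Lengthening the sleep between polls keeps the freshly started apiserver from being hammered while the static pods register, which is the same idea as the growing, jittered retry intervals (6.3s, 6.7s, 10s, 8.7s) recorded in the log above.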
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-08-30 22:19:17 UTC, ends at Wed 2023-08-30 22:37:46 UTC. --
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.707210871Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=fdf39186-8832-4081-86d4-f07cf8353e9d name=/runtime.v1alpha2.RuntimeService/Status
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.845722329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=159513d0-1023-40ee-81c3-05d55a2b335d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.845801342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=159513d0-1023-40ee-81c3-05d55a2b335d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.846047315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=159513d0-1023-40ee-81c3-05d55a2b335d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.883483968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=866730f7-0596-4208-a90d-a48496b03b7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.883548915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=866730f7-0596-4208-a90d-a48496b03b7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.883762197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=866730f7-0596-4208-a90d-a48496b03b7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.921587401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b65eee69-9422-4b81-9562-904f7442acbe name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.921649017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b65eee69-9422-4b81-9562-904f7442acbe name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.921836318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b65eee69-9422-4b81-9562-904f7442acbe name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.956459791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=08cf6d7d-c415-456e-a09a-7cac3f96600f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.956526048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=08cf6d7d-c415-456e-a09a-7cac3f96600f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.956705931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=08cf6d7d-c415-456e-a09a-7cac3f96600f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.995031040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6bb0a97d-f8e3-420e-b7ff-691d7e1bfb6e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.995119058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6bb0a97d-f8e3-420e-b7ff-691d7e1bfb6e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:45 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:45.995452970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6bb0a97d-f8e3-420e-b7ff-691d7e1bfb6e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.027430599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0c4e8d0d-0aac-4402-92f3-94c4726546e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.027558696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0c4e8d0d-0aac-4402-92f3-94c4726546e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.027738603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0c4e8d0d-0aac-4402-92f3-94c4726546e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.069418440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e7306a46-3d79-4fb0-8a31-4768977042d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.069511588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e7306a46-3d79-4fb0-8a31-4768977042d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.069665505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7306a46-3d79-4fb0-8a31-4768977042d7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.099125022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=38487fd3-b1d7-4fb9-b5eb-f1f98d080935 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.099219177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=38487fd3-b1d7-4fb9-b5eb-f1f98d080935 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 30 22:37:46 old-k8s-version-250163 crio[727]: time="2023-08-30 22:37:46.099477216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017,PodSandboxId:6cc013a0480f26ca10210b0810b2ea204ffa98dd730c105acfa12b04c2a2ea4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1693434340043203065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3da9204-c5aa-44ce-9584-026334decc99,},Annotations:map[string]string{io.kubernetes.container.hash: 6b52e210,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b,PodSandboxId:4f96a206b5b26750a5b4e314b225f61140da0565d77777f9606d408b1fddca1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1693434339066139289,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-866k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0be4379-0283-4c7b-854d-755e28e9807d,},Annotations:map[string]string{io.kubernetes.container.hash: 40b2f0be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24,PodSandboxId:e1d50f90c483517da6a1af298481b6a5b38ef4fa43d6a641ec78e2ba67775c6a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1693434338023030895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-ntb45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1efd40-7731-4f7c-9155-46f8af8b9883,},Annotations:map[string]string{io.kubernetes.container.hash: de5570,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae,PodSandboxId:9b942bb4cce909bbc3f6e2a1dddc5e80c724b2838617b003d53726f07013dd06,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1693434313247708413,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b7b8c507e9eb4df94fa032fc1138d46,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f9ef3536,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8,PodSandboxId:a85229d1e4c0e27ef23436de004576a5b18b0899865b495dd109f81c6482b264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1693434311970751968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{
io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0,PodSandboxId:fadf3ac872dc6c8ed704d0869849efda220282b6aaa7dad53e39a20ba9b7a5e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1693434311572420335,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea,PodSandboxId:f6a8673bc69b43d88aab7cc83e9b18edb250afac192c561543e3a76cf6ee5376,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1693434311487849591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-250163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed888ce8cac825a2a0220eb8f9d850d8,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bafb31c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=38487fd3-b1d7-4fb9-b5eb-f1f98d080935 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	9f802bfb55765       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   6cc013a0480f2
	e431d8a4958cb       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   4f96a206b5b26
	dc635d8d2b1fd       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   e1d50f90c4835
	89270eb7de796       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   12 minutes ago      Running             etcd                      0                   9b942bb4cce90
	f57bf075c4b20       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   12 minutes ago      Running             kube-scheduler            0                   a85229d1e4c0e
	39b4851cb8055       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   12 minutes ago      Running             kube-controller-manager   0                   fadf3ac872dc6
	920aba79a414a       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   12 minutes ago      Running             kube-apiserver            0                   f6a8673bc69b4
	
	* 
	* ==> coredns [dc635d8d2b1fdfc9f01f8eb53e43dc770f4152e8597fa22148fafc40179cfe24] <==
	* .:53
	2023-08-30T22:25:38.870Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-08-30T22:25:38.870Z [INFO] CoreDNS-1.6.2
	2023-08-30T22:25:38.870Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-08-30T22:26:11.240Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	2023-08-30T22:26:11.250Z [INFO] 127.0.0.1:57536 - 23550 "HINFO IN 1862811921159354688.5496499725226021289. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009990067s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-250163
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-250163
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=old-k8s-version-250163
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_25_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:37:17 +0000   Wed, 30 Aug 2023 22:25:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:37:17 +0000   Wed, 30 Aug 2023 22:25:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:37:17 +0000   Wed, 30 Aug 2023 22:25:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:37:17 +0000   Wed, 30 Aug 2023 22:25:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    old-k8s-version-250163
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 62beb253a8dc41729f656e941cb2e92f
	 System UUID:                62beb253-a8dc-4172-9f65-6e941cb2e92f
	 Boot ID:                    8af8a3e6-a3bd-4c5e-a24c-b628d1ae9309
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-ntb45                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-250163                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-250163             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-250163    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-866k8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-250163             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                metrics-server-74d5856cc6-h6bcw                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-250163     Node old-k8s-version-250163 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet, old-k8s-version-250163     Node old-k8s-version-250163 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet, old-k8s-version-250163     Node old-k8s-version-250163 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-250163  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug30 22:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075294] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.445741] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.478783] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155071] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.599133] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.294424] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.112730] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.209919] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.124730] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.254347] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +20.207259] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.405368] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug30 22:20] kauditd_printk_skb: 18 callbacks suppressed
	[Aug30 22:25] systemd-fstab-generator[3164]: Ignoring "noauto" for root device
	[  +0.723412] kauditd_printk_skb: 6 callbacks suppressed
	[Aug30 22:26] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [89270eb7de796aa4f7ae2173435d675c2975a217a6371175ff1ae4d3b405a9ae] <==
	* 2023-08-30 22:25:13.394268 I | raft: newRaft f8926bd555ec3d0e [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-08-30 22:25:13.394327 I | raft: f8926bd555ec3d0e became follower at term 1
	2023-08-30 22:25:13.402969 W | auth: simple token is not cryptographically signed
	2023-08-30 22:25:13.410882 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-08-30 22:25:13.412058 I | etcdserver: f8926bd555ec3d0e as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-08-30 22:25:13.412395 I | etcdserver/membership: added member f8926bd555ec3d0e [https://192.168.39.10:2380] to cluster 3a710b3f69152e32
	2023-08-30 22:25:13.414026 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-30 22:25:13.414410 I | embed: listening for metrics on http://192.168.39.10:2381
	2023-08-30 22:25:13.414617 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-30 22:25:13.794762 I | raft: f8926bd555ec3d0e is starting a new election at term 1
	2023-08-30 22:25:13.794851 I | raft: f8926bd555ec3d0e became candidate at term 2
	2023-08-30 22:25:13.794877 I | raft: f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 2
	2023-08-30 22:25:13.795056 I | raft: f8926bd555ec3d0e became leader at term 2
	2023-08-30 22:25:13.795180 I | raft: raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 2
	2023-08-30 22:25:13.795745 I | etcdserver: published {Name:old-k8s-version-250163 ClientURLs:[https://192.168.39.10:2379]} to cluster 3a710b3f69152e32
	2023-08-30 22:25:13.795790 I | embed: ready to serve client requests
	2023-08-30 22:25:13.796178 I | etcdserver: setting up the initial cluster version to 3.3
	2023-08-30 22:25:13.796277 I | embed: ready to serve client requests
	2023-08-30 22:25:13.797588 I | embed: serving client requests on 192.168.39.10:2379
	2023-08-30 22:25:13.797740 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-30 22:25:13.809534 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-08-30 22:25:13.809636 I | etcdserver/api: enabled capabilities for version 3.3
	2023-08-30 22:25:38.807571 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (143.159377ms) to execute
	2023-08-30 22:35:13.831432 I | mvcc: store.index: compact 666
	2023-08-30 22:35:13.833301 I | mvcc: finished scheduled compaction at 666 (took 1.478049ms)
	
	* 
	* ==> kernel <==
	*  22:37:46 up 18 min,  0 users,  load average: 0.40, 0.20, 0.17
	Linux old-k8s-version-250163 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [920aba79a414a8f2abe29e67d73d4d70deaf554713614ae79ff443f69a0504ea] <==
	* I0830 22:30:17.984631       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:30:17.984743       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:30:17.984798       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:30:17.984806       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:31:17.985281       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:31:17.985540       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:31:17.985598       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:31:17.985619       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:33:17.986015       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:33:17.986164       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:33:17.986222       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:33:17.986241       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:35:17.986821       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:35:17.987065       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:35:17.987141       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:35:17.987149       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 22:36:17.987521       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0830 22:36:17.987727       1 handler_proxy.go:99] no RequestInfo found in the context
	E0830 22:36:17.987854       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:36:17.987887       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [39b4851cb80552f0e07a26ecb1370663790182d59a19cfd10a1c6283e013deb0] <==
	* W0830 22:31:30.102784       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:31:40.028460       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:32:02.104768       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:32:10.280492       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:32:34.107270       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:32:40.532724       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:33:06.109154       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:33:10.788005       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:33:38.111225       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:33:41.040814       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:34:10.113181       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:34:11.293155       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0830 22:34:41.551119       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:34:42.115267       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:35:11.803277       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:35:14.117036       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:35:42.055299       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:35:46.118889       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:36:12.307656       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:36:18.120634       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:36:42.559608       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:36:50.122403       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:37:12.811982       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0830 22:37:22.124588       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0830 22:37:43.064296       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [e431d8a4958cb00414c3148067dd0fe5dcfe5358fb4f2fc2be4d3f1914c6e68b] <==
	* W0830 22:25:39.532741       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0830 22:25:39.542057       1 node.go:135] Successfully retrieved node IP: 192.168.39.10
	I0830 22:25:39.542086       1 server_others.go:149] Using iptables Proxier.
	I0830 22:25:39.542546       1 server.go:529] Version: v1.16.0
	I0830 22:25:39.552279       1 config.go:313] Starting service config controller
	I0830 22:25:39.552497       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0830 22:25:39.552653       1 config.go:131] Starting endpoints config controller
	I0830 22:25:39.552760       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0830 22:25:39.652843       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0830 22:25:39.653283       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f57bf075c4b208621fba7417909d91e8f819164b6b6d02c85ae522f895ee0fe8] <==
	* W0830 22:25:17.018003       1 authentication.go:79] Authentication is disabled
	I0830 22:25:17.018028       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0830 22:25:17.018468       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0830 22:25:17.056751       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 22:25:17.058179       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 22:25:17.058318       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 22:25:17.060687       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 22:25:17.060876       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 22:25:17.061185       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 22:25:17.063272       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 22:25:17.063343       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 22:25:17.063375       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 22:25:17.068657       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 22:25:17.070819       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 22:25:18.058342       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 22:25:18.059758       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 22:25:18.060674       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 22:25:18.062000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 22:25:18.064494       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 22:25:18.069331       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 22:25:18.069978       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 22:25:18.071137       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 22:25:18.073391       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 22:25:18.074566       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 22:25:18.076009       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-30 22:19:17 UTC, ends at Wed 2023-08-30 22:37:46 UTC. --
	Aug 30 22:33:08 old-k8s-version-250163 kubelet[3170]: E0830 22:33:08.457534    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:33:22 old-k8s-version-250163 kubelet[3170]: E0830 22:33:22.457191    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:33:37 old-k8s-version-250163 kubelet[3170]: E0830 22:33:37.457714    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:33:51 old-k8s-version-250163 kubelet[3170]: E0830 22:33:51.456997    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:04 old-k8s-version-250163 kubelet[3170]: E0830 22:34:04.458035    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:19 old-k8s-version-250163 kubelet[3170]: E0830 22:34:19.457980    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:33 old-k8s-version-250163 kubelet[3170]: E0830 22:34:33.458149    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:45 old-k8s-version-250163 kubelet[3170]: E0830 22:34:45.457460    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:34:56 old-k8s-version-250163 kubelet[3170]: E0830 22:34:56.457787    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:35:10 old-k8s-version-250163 kubelet[3170]: E0830 22:35:10.541857    3170 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Aug 30 22:35:11 old-k8s-version-250163 kubelet[3170]: E0830 22:35:11.456835    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:35:25 old-k8s-version-250163 kubelet[3170]: E0830 22:35:25.457021    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:35:40 old-k8s-version-250163 kubelet[3170]: E0830 22:35:40.457695    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:35:53 old-k8s-version-250163 kubelet[3170]: E0830 22:35:53.457403    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:36:07 old-k8s-version-250163 kubelet[3170]: E0830 22:36:07.456818    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:36:21 old-k8s-version-250163 kubelet[3170]: E0830 22:36:21.457764    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:36:34 old-k8s-version-250163 kubelet[3170]: E0830 22:36:34.481199    3170 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:36:34 old-k8s-version-250163 kubelet[3170]: E0830 22:36:34.481307    3170 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:36:34 old-k8s-version-250163 kubelet[3170]: E0830 22:36:34.481400    3170 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 30 22:36:34 old-k8s-version-250163 kubelet[3170]: E0830 22:36:34.481443    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Aug 30 22:36:47 old-k8s-version-250163 kubelet[3170]: E0830 22:36:47.457376    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:37:02 old-k8s-version-250163 kubelet[3170]: E0830 22:37:02.459629    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:37:13 old-k8s-version-250163 kubelet[3170]: E0830 22:37:13.457212    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:37:27 old-k8s-version-250163 kubelet[3170]: E0830 22:37:27.457242    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 30 22:37:38 old-k8s-version-250163 kubelet[3170]: E0830 22:37:38.457513    3170 pod_workers.go:191] Error syncing pod 10b707f1-f4e0-43dc-bb3e-e16c405b4a27 ("metrics-server-74d5856cc6-h6bcw_kube-system(10b707f1-f4e0-43dc-bb3e-e16c405b4a27)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [9f802bfb55765f13506ac03fa39844c01e0d54327dc8c2377c106e3c939c4017] <==
	* I0830 22:25:40.143108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 22:25:40.163233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 22:25:40.163498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 22:25:40.173805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 22:25:40.174786       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-250163_9155072c-a4d7-4e6f-9b3d-499b40e038a6!
	I0830 22:25:40.176941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfc689f5-cdce-4b1f-82e2-4c32d1ad584d", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-250163_9155072c-a4d7-4e6f-9b3d-499b40e038a6 became leader
	I0830 22:25:40.275060       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-250163_9155072c-a4d7-4e6f-9b3d-499b40e038a6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-250163 -n old-k8s-version-250163
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-250163 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-h6bcw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-250163 describe pod metrics-server-74d5856cc6-h6bcw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-250163 describe pod metrics-server-74d5856cc6-h6bcw: exit status 1 (74.794286ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-h6bcw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-250163 describe pod metrics-server-74d5856cc6-h6bcw: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.85s)

                                                
                                    

Test pass (225/288)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.66
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.1/json-events 4.96
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.59
20 TestOffline 109.57
22 TestAddons/Setup 142.44
24 TestAddons/parallel/Registry 17.42
26 TestAddons/parallel/InspektorGadget 12.08
27 TestAddons/parallel/MetricsServer 6.09
28 TestAddons/parallel/HelmTiller 16.94
30 TestAddons/parallel/CSI 53.74
31 TestAddons/parallel/Headlamp 16.27
32 TestAddons/parallel/CloudSpanner 5.98
35 TestAddons/serial/GCPAuth/Namespaces 0.14
37 TestCertOptions 47.87
38 TestCertExpiration 293.87
40 TestForceSystemdFlag 60.57
41 TestForceSystemdEnv 70.18
43 TestKVMDriverInstallOrUpdate 1.35
47 TestErrorSpam/setup 47.91
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.81
50 TestErrorSpam/pause 1.54
51 TestErrorSpam/unpause 1.6
52 TestErrorSpam/stop 2.26
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 97.39
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 52.19
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
64 TestFunctional/serial/CacheCmd/cache/add_local 1.09
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
69 TestFunctional/serial/CacheCmd/cache/delete 0.12
70 TestFunctional/serial/MinikubeKubectlCmd 0.12
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
72 TestFunctional/serial/ExtraConfig 39.12
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.34
75 TestFunctional/serial/LogsFileCmd 1.37
76 TestFunctional/serial/InvalidService 4.14
78 TestFunctional/parallel/ConfigCmd 0.42
79 TestFunctional/parallel/DashboardCmd 14.51
80 TestFunctional/parallel/DryRun 0.3
81 TestFunctional/parallel/InternationalLanguage 0.19
82 TestFunctional/parallel/StatusCmd 0.85
86 TestFunctional/parallel/ServiceCmdConnect 26.51
87 TestFunctional/parallel/AddonsCmd 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 50.49
90 TestFunctional/parallel/SSHCmd 0.42
91 TestFunctional/parallel/CpCmd 0.93
92 TestFunctional/parallel/MySQL 25.88
93 TestFunctional/parallel/FileSync 0.23
94 TestFunctional/parallel/CertSync 1.43
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
102 TestFunctional/parallel/License 0.17
103 TestFunctional/parallel/Version/short 0.28
104 TestFunctional/parallel/Version/components 1.05
105 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
117 TestFunctional/parallel/MountCmd/any-port 26.09
118 TestFunctional/parallel/ServiceCmd/DeployApp 9.53
119 TestFunctional/parallel/MountCmd/specific-port 1.83
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
121 TestFunctional/parallel/ProfileCmd/profile_list 0.28
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
123 TestFunctional/parallel/MountCmd/VerifyCleanup 1.38
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
128 TestFunctional/parallel/ImageCommands/ImageBuild 2.66
129 TestFunctional/parallel/ImageCommands/Setup 0.92
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.27
131 TestFunctional/parallel/ServiceCmd/List 1.4
132 TestFunctional/parallel/ServiceCmd/JSONOutput 1.33
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.97
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
135 TestFunctional/parallel/ServiceCmd/Format 0.5
136 TestFunctional/parallel/ServiceCmd/URL 0.41
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.35
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.18
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.43
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.55
142 TestFunctional/delete_addon-resizer_images 0.07
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 84.64
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.47
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
155 TestJSONOutput/start/Command 90.9
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.65
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.63
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 7.1
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.22
183 TestMainNoArgs 0.06
184 TestMinikubeProfile 97.16
187 TestMountStart/serial/StartWithMountFirst 29.48
188 TestMountStart/serial/VerifyMountFirst 0.41
189 TestMountStart/serial/StartWithMountSecond 25.42
190 TestMountStart/serial/VerifyMountSecond 0.4
191 TestMountStart/serial/DeleteFirst 0.89
192 TestMountStart/serial/VerifyMountPostDelete 0.41
193 TestMountStart/serial/Stop 1.2
194 TestMountStart/serial/RestartStopped 23.67
195 TestMountStart/serial/VerifyMountPostStop 0.41
198 TestMultiNode/serial/FreshStart2Nodes 114.81
199 TestMultiNode/serial/DeployApp2Nodes 4.9
201 TestMultiNode/serial/AddNode 41.41
202 TestMultiNode/serial/ProfileList 0.22
203 TestMultiNode/serial/CopyFile 7.75
204 TestMultiNode/serial/StopNode 2.98
205 TestMultiNode/serial/StartAfterStop 29.18
207 TestMultiNode/serial/DeleteNode 1.78
209 TestMultiNode/serial/RestartMultiNode 447.55
210 TestMultiNode/serial/ValidateNameConflict 49.39
217 TestScheduledStopUnix 117.92
223 TestKubernetesUpgrade 199.58
227 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
234 TestNoKubernetes/serial/StartWithK8s 104.09
235 TestNoKubernetes/serial/StartWithStopK8s 6.86
236 TestStoppedBinaryUpgrade/Setup 0.29
238 TestNoKubernetes/serial/Start 50.76
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
240 TestNoKubernetes/serial/ProfileList 1.12
241 TestNoKubernetes/serial/Stop 1.4
242 TestNoKubernetes/serial/StartNoArgs 26.26
244 TestPause/serial/Start 88.85
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
253 TestNetworkPlugins/group/false 4.05
259 TestStartStop/group/old-k8s-version/serial/FirstStart 393.75
261 TestStartStop/group/no-preload/serial/FirstStart 163.87
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.5
264 TestStartStop/group/embed-certs/serial/FirstStart 125.41
266 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.54
267 TestStartStop/group/no-preload/serial/DeployApp 8.55
268 TestStartStop/group/embed-certs/serial/DeployApp 8.88
269 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.53
271 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.44
273 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.42
274 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
278 TestStartStop/group/no-preload/serial/SecondStart 666.77
280 TestStartStop/group/old-k8s-version/serial/DeployApp 8.43
281 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.84
284 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 572.72
286 TestStartStop/group/old-k8s-version/serial/SecondStart 576.19
296 TestStartStop/group/newest-cni/serial/FirstStart 62.99
297 TestNetworkPlugins/group/auto/Start 103.46
298 TestStartStop/group/newest-cni/serial/DeployApp 0
299 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
300 TestStartStop/group/newest-cni/serial/Stop 12.13
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
302 TestStartStop/group/newest-cni/serial/SecondStart 55.32
303 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
304 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
305 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
306 TestStartStop/group/newest-cni/serial/Pause 2.86
307 TestNetworkPlugins/group/kindnet/Start 69.38
308 TestNetworkPlugins/group/calico/Start 119.98
309 TestNetworkPlugins/group/auto/KubeletFlags 0.24
310 TestNetworkPlugins/group/auto/NetCatPod 12.36
311 TestNetworkPlugins/group/auto/DNS 0.19
312 TestNetworkPlugins/group/auto/Localhost 0.15
313 TestNetworkPlugins/group/auto/HairPin 0.16
314 TestNetworkPlugins/group/custom-flannel/Start 99.38
315 TestNetworkPlugins/group/enable-default-cni/Start 136.4
316 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
317 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
318 TestNetworkPlugins/group/kindnet/NetCatPod 11.41
319 TestNetworkPlugins/group/kindnet/DNS 0.2
320 TestNetworkPlugins/group/kindnet/Localhost 0.18
321 TestNetworkPlugins/group/kindnet/HairPin 0.22
322 TestNetworkPlugins/group/flannel/Start 86.29
323 TestNetworkPlugins/group/calico/ControllerPod 5.04
324 TestNetworkPlugins/group/calico/KubeletFlags 0.24
325 TestNetworkPlugins/group/calico/NetCatPod 12.41
326 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
327 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.45
328 TestNetworkPlugins/group/calico/DNS 0.22
329 TestNetworkPlugins/group/calico/Localhost 0.21
330 TestNetworkPlugins/group/calico/HairPin 0.39
331 TestNetworkPlugins/group/custom-flannel/DNS 0.21
332 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
333 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
334 TestNetworkPlugins/group/bridge/Start 63.58
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.43
337 TestNetworkPlugins/group/flannel/ControllerPod 5.4
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
339 TestNetworkPlugins/group/flannel/NetCatPod 11.46
340 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
341 TestNetworkPlugins/group/enable-default-cni/Localhost 0.49
342 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
343 TestNetworkPlugins/group/flannel/DNS 0.19
344 TestNetworkPlugins/group/flannel/Localhost 0.18
345 TestNetworkPlugins/group/flannel/HairPin 0.16
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
347 TestNetworkPlugins/group/bridge/NetCatPod 12.36
348 TestNetworkPlugins/group/bridge/DNS 0.19
349 TestNetworkPlugins/group/bridge/Localhost 0.18
350 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.16.0/json-events (10.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-953651 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-953651 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.655962048s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.66s)
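For reference, the download-only flow above can be reproduced by hand. A minimal sketch, assuming the default MINIKUBE_HOME and an illustrative profile name; the flags mirror the invocation logged above:

    # download the ISO, preload tarball and kubectl for one Kubernetes version, without creating a VM
    out/minikube-linux-amd64 start -o=json --download-only -p download-demo \
      --force --alsologtostderr --kubernetes-version=v1.16.0 \
      --container-runtime=crio --driver=kvm2
    # the cached preload should then be present
    ls ~/.minikube/cache/preloaded-tarball/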

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-953651
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-953651: exit status 85 (72.189688ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-953651 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC |          |
	|         | -p download-only-953651        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:09:17
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:09:17.917314  962633 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:09:17.917471  962633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:09:17.917480  962633 out.go:309] Setting ErrFile to fd 2...
	I0830 21:09:17.917484  962633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:09:17.917687  962633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	W0830 21:09:17.917803  962633 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17114-955377/.minikube/config/config.json: open /home/jenkins/minikube-integration/17114-955377/.minikube/config/config.json: no such file or directory
	I0830 21:09:17.918408  962633 out.go:303] Setting JSON to true
	I0830 21:09:17.919375  962633 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10305,"bootTime":1693419453,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:09:17.919455  962633 start.go:138] virtualization: kvm guest
	I0830 21:09:17.922140  962633 out.go:97] [download-only-953651] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 21:09:17.923676  962633 out.go:169] MINIKUBE_LOCATION=17114
	W0830 21:09:17.922239  962633 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball: no such file or directory
	I0830 21:09:17.922284  962633 notify.go:220] Checking for updates...
	I0830 21:09:17.926357  962633 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:09:17.927738  962633 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:09:17.929159  962633 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:09:17.930437  962633 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0830 21:09:17.932880  962633 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0830 21:09:17.933208  962633 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:09:17.964780  962633 out.go:97] Using the kvm2 driver based on user configuration
	I0830 21:09:17.964803  962633 start.go:298] selected driver: kvm2
	I0830 21:09:17.964809  962633 start.go:902] validating driver "kvm2" against <nil>
	I0830 21:09:17.965219  962633 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:09:17.965338  962633 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17114-955377/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0830 21:09:17.981158  962633 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0830 21:09:17.981204  962633 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 21:09:17.981687  962633 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0830 21:09:17.981859  962633 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0830 21:09:17.981902  962633 cni.go:84] Creating CNI manager for ""
	I0830 21:09:17.981912  962633 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0830 21:09:17.981920  962633 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0830 21:09:17.981931  962633 start_flags.go:319] config:
	{Name:download-only-953651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-953651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:09:17.982233  962633 iso.go:125] acquiring lock: {Name:mk46910f853d17f11045ef5235e32ef2f2012eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:09:17.984111  962633 out.go:97] Downloading VM boot image ...
	I0830 21:09:17.984156  962633 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0830 21:09:20.676181  962633 out.go:97] Starting control plane node download-only-953651 in cluster download-only-953651
	I0830 21:09:20.676202  962633 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 21:09:20.703529  962633 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0830 21:09:20.703572  962633 cache.go:57] Caching tarball of preloaded images
	I0830 21:09:20.703749  962633 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 21:09:20.705658  962633 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0830 21:09:20.705673  962633 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:09:20.734465  962633 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0830 21:09:24.140311  962633 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:09:24.140404  962633 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17114-955377/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0830 21:09:24.991840  962633 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0830 21:09:24.992179  962633 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/download-only-953651/config.json ...
	I0830 21:09:24.992208  962633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/download-only-953651/config.json: {Name:mk2ac6d379e17d735fe46c9afda0420259f34ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:09:24.992361  962633 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 21:09:24.992529  962633 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17114-955377/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-953651"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
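The exit status 85 above is expected rather than a failure: a --download-only profile never creates a control-plane node, so "minikube logs" has nothing to collect (hence the "control plane node \"\" does not exist" message). A minimal sketch of the same check, with an illustrative profile name:

    out/minikube-linux-amd64 logs -p download-demo
    echo $?    # prints 85 for a profile with no control-plane node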

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/json-events (4.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-953651 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-953651 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.954897525s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (4.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-953651
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-953651: exit status 85 (73.252178ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-953651 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC |          |
	|         | -p download-only-953651        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-953651 | jenkins | v1.31.2 | 30 Aug 23 21:09 UTC |          |
	|         | -p download-only-953651        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:09:28
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:09:28.647673  962700 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:09:28.647833  962700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:09:28.647842  962700 out.go:309] Setting ErrFile to fd 2...
	I0830 21:09:28.647847  962700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:09:28.648044  962700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	W0830 21:09:28.648151  962700 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17114-955377/.minikube/config/config.json: open /home/jenkins/minikube-integration/17114-955377/.minikube/config/config.json: no such file or directory
	I0830 21:09:28.648658  962700 out.go:303] Setting JSON to true
	I0830 21:09:28.649701  962700 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10316,"bootTime":1693419453,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:09:28.649764  962700 start.go:138] virtualization: kvm guest
	I0830 21:09:28.652000  962700 out.go:97] [download-only-953651] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 21:09:28.653578  962700 out.go:169] MINIKUBE_LOCATION=17114
	I0830 21:09:28.652185  962700 notify.go:220] Checking for updates...
	I0830 21:09:28.656284  962700 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:09:28.657751  962700 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:09:28.659279  962700 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:09:28.660587  962700 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-953651"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-953651
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-727468 --alsologtostderr --binary-mirror http://127.0.0.1:39809 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-727468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-727468
--- PASS: TestBinaryMirror (0.59s)
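A sketch of using --binary-mirror outside the test harness; the assumption here is a local HTTP server that mirrors the dl.k8s.io layout for kubectl, kubelet and kubeadm (the python server, directory and port are illustrative):

    python3 -m http.server 39809 --directory /srv/k8s-mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:39809 --driver=kvm2 --container-runtime=crio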

                                                
                                    
x
+
TestOffline (109.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-097159 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-097159 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m47.593613043s)
helpers_test.go:175: Cleaning up "offline-crio-097159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-097159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-097159: (1.976112024s)
--- PASS: TestOffline (109.57s)

                                                
                                    
x
+
TestAddons/Setup (142.44s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-585092 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-585092 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.440956748s)
--- PASS: TestAddons/Setup (142.44s)
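The same addons can also be toggled one at a time on an existing profile instead of passing --addons at start; a minimal sketch using addon names from the start flags above:

    out/minikube-linux-amd64 -p addons-585092 addons enable registry
    out/minikube-linux-amd64 -p addons-585092 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-585092 addons list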

                                                
                                    
x
+
TestAddons/parallel/Registry (17.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 24.199676ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kqslj" [2d5d3cd0-8bb5-4b94-b187-679fcd34e3a8] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.032915176s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5t4kg" [c0624397-887d-4175-a301-884300862c9a] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014566324s
addons_test.go:316: (dbg) Run:  kubectl --context addons-585092 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-585092 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-585092 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.158341099s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 ip
2023/08/30 21:12:13 [DEBUG] GET http://192.168.39.136:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 addons disable registry --alsologtostderr -v=1
addons_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p addons-585092 addons disable registry --alsologtostderr -v=1: (1.021291193s)
--- PASS: TestAddons/parallel/Registry (17.42s)
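Condensed, the registry check above is a DNS-based probe from inside the cluster plus a request to the node's registry port; a sketch reusing the test's image and service name (the trailing path on the curl is illustrative):

    kubectl --context addons-585092 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -s "http://$(out/minikube-linux-amd64 -p addons-585092 ip):5000/"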

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.08s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pq6ph" [f065eccc-c324-41d1-9f61-eec685a6589a] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.031729372s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-585092
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-585092: (7.046906978s)
--- PASS: TestAddons/parallel/InspektorGadget (12.08s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.09s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 22.650793ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-pflsn" [0c4abf13-24b6-428f-9a0d-20153eaee786] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.027044128s
addons_test.go:391: (dbg) Run:  kubectl --context addons-585092 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.09s)
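Once metrics-server is healthy, the kubelet metrics it aggregates are queryable directly; a minimal sketch of the check the test runs, plus the node-level variant:

    kubectl --context addons-585092 top pods -n kube-system
    kubectl --context addons-585092 top nodes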

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (16.94s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 22.698737ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-2qb8z" [62fd46fc-44ab-42a7-92d2-e780114685b9] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.034257797s
addons_test.go:449: (dbg) Run:  kubectl --context addons-585092 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-585092 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.406332331s)
addons_test.go:454: kubectl --context addons-585092 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:449: (dbg) Run:  kubectl --context addons-585092 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-585092 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.249107199s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.94s)
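The "Unable to use a TTY" warning comes from passing -it while stdin is not a terminal; kubectl falls back to streaming logs, which is why the retry still succeeds. A sketch that avoids the warning by attaching without requesting a TTY (using --attach here is an assumption, not what the test does):

    kubectl --context addons-585092 run helm-test --rm --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 --namespace=kube-system --attach -- version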

                                                
                                    
x
+
TestAddons/parallel/CSI (53.74s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 21.929617ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-585092 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-585092 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [039eaccc-cb2b-489a-b19f-34c749adbfa3] Pending
helpers_test.go:344: "task-pv-pod" [039eaccc-cb2b-489a-b19f-34c749adbfa3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [039eaccc-cb2b-489a-b19f-34c749adbfa3] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.028403295s
addons_test.go:560: (dbg) Run:  kubectl --context addons-585092 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-585092 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-585092 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-585092 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-585092 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-585092 delete pod task-pv-pod: (1.219874273s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-585092 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-585092 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-585092 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5b1a7bb6-346d-4cec-b22a-70ea97e0106e] Pending
helpers_test.go:344: "task-pv-pod-restore" [5b1a7bb6-346d-4cec-b22a-70ea97e0106e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5b1a7bb6-346d-4cec-b22a-70ea97e0106e] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.023396311s
addons_test.go:602: (dbg) Run:  kubectl --context addons-585092 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-585092 delete pod task-pv-pod-restore: (1.384361254s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-585092 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-585092 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-585092 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.779469923s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-585092 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.74s)
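The repeated helper invocations above are simply polling the claim's phase until the CSI hostpath provisioner binds it; a minimal sketch of that wait loop (timeout handling omitted, Bound being the standard PVC phase):

    until [ "$(kubectl --context addons-585092 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done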

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-585092 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-585092 --alsologtostderr -v=1: (2.23993571s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-wwzgb" [735bb5c3-56c4-40c9-86e6-be64f2008fb3] Pending
helpers_test.go:344: "headlamp-699c48fb74-wwzgb" [735bb5c3-56c4-40c9-86e6-be64f2008fb3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-wwzgb" [735bb5c3-56c4-40c9-86e6-be64f2008fb3] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.027048063s
--- PASS: TestAddons/parallel/Headlamp (16.27s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.98s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-sg78n" [7bf08707-b2f8-4181-b1be-f0db85caaf46] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.02962963s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-585092
--- PASS: TestAddons/parallel/CloudSpanner (5.98s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-585092 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-585092 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
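The two commands above check that the gcp-auth addon propagates its gcp-auth secret into namespaces created after the addon was enabled; a sketch with an illustrative namespace name:

    kubectl --context addons-585092 create ns demo-namespace
    kubectl --context addons-585092 get secret gcp-auth -n demo-namespace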

                                                
                                    
x
+
TestCertOptions (47.87s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-519738 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-519738 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.324847015s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-519738 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-519738 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-519738 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-519738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-519738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-519738: (1.042312522s)
--- PASS: TestCertOptions (47.87s)
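To see that the extra SANs and the non-default API server port from the flags above actually land in the serving certificate and kubeconfig, the same openssl call can be filtered; a sketch (the grep and jsonpath are illustrative):

    out/minikube-linux-amd64 -p cert-options-519738 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-519738 config view -o jsonpath='{.clusters[0].cluster.server}'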

                                                
                                    
x
+
TestCertExpiration (293.87s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-693390 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-693390 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m13.49981298s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-693390 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-693390 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.325067719s)
helpers_test.go:175: Cleaning up "cert-expiration-693390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-693390
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-693390: (1.047492072s)
--- PASS: TestCertExpiration (293.87s)
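The second start above should regenerate the cluster certificates with the longer --cert-expiration once the original 3m certificates have lapsed; a sketch of inspecting the resulting expiry while the profile still exists (the openssl filter is illustrative):

    out/minikube-linux-amd64 -p cert-expiration-693390 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"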

                                                
                                    
x
+
TestForceSystemdFlag (60.57s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-882278 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0830 22:06:49.715450  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 22:06:57.076345  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-882278 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.322550185s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-882278 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-882278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-882278
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-882278: (1.031560022s)
--- PASS: TestForceSystemdFlag (60.57s)
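The assertion behind this test reduces to checking which cgroup manager CRI-O was configured with; a sketch of the same check by hand, with an illustrative profile name (grepping for cgroup_manager assumes that key appears in the drop-in):

    out/minikube-linux-amd64 -p force-systemd-demo ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager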

                                                
                                    
x
+
TestForceSystemdEnv (70.18s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-134135 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-134135 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.120164877s)
helpers_test.go:175: Cleaning up "force-systemd-env-134135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-134135
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-134135: (1.056183434s)
--- PASS: TestForceSystemdEnv (70.18s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.35s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.35s)

                                                
                                    
x
+
TestErrorSpam/setup (47.91s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-379496 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-379496 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-379496 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-379496 --driver=kvm2  --container-runtime=crio: (47.90674029s)
--- PASS: TestErrorSpam/setup (47.91s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

                                                
                                    
TestErrorSpam/stop (2.26s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 stop: (2.09440289s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-379496 --log_dir /tmp/nospam-379496 stop
--- PASS: TestErrorSpam/stop (2.26s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17114-955377/.minikube/files/etc/test/nested/copy/962621/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (97.39s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944257 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-944257 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.389557823s)
--- PASS: TestFunctional/serial/StartWithProxy (97.39s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (52.19s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944257 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-944257 --alsologtostderr -v=8: (52.192694838s)
functional_test.go:659: soft start took 52.19344847s for "functional-944257" cluster.
--- PASS: TestFunctional/serial/SoftStart (52.19s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-944257 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 cache add registry.k8s.io/pause:3.1: (1.020661068s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 cache add registry.k8s.io/pause:3.3: (1.105436616s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 cache add registry.k8s.io/pause:latest: (1.071627673s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-944257 /tmp/TestFunctionalserialCacheCmdcacheadd_local3300663235/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 cache add minikube-local-cache-test:functional-944257
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 cache delete minikube-local-cache-test:functional-944257
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-944257
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.384801ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
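
The cache_reload run above removes a cached image from the node, confirms that crictl no longer finds it, runs "minikube cache reload", and confirms the image is back. Below is a minimal Go sketch of that same check, reusing the out/minikube-linux-amd64 binary and functional-944257 profile names from this log; it is an illustration, not the test's own code.

	// cache_reload_sketch.go: remove a cached image in the node, reload the
	// cache, and verify the image reappears.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// run executes a command, echoes its combined output, and returns its error.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		mk := "out/minikube-linux-amd64"
		profile := "functional-944257"
		img := "registry.k8s.io/pause:latest"

		// Remove the image inside the node; the next inspecti should fail.
		_ = run(mk, "-p", profile, "ssh", "sudo crictl rmi "+img)
		if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
			log.Fatal("image still present after rmi")
		}
		// Reload the cache; the image should then be present again.
		if err := run(mk, "-p", profile, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
			log.Fatal("image missing after cache reload: ", err)
		}
	}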

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 kubectl -- --context functional-944257 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-944257 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944257 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-944257 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.119147155s)
functional_test.go:757: restart took 39.119260516s for "functional-944257" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.12s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-944257 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
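
The ComponentHealth step lists the control-plane pods with "kubectl get po -l tier=control-plane -o json" and checks that each reports phase Running and a Ready status. The following rough Go sketch performs the same check, decoding only the fields it needs from the core/v1 Pod schema; the kubectl context name is taken from this log, and this is not the test's own implementation.

	// component_health_sketch.go: report phase and Ready condition for the
	// control-plane pods.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// podList declares just the fields this check reads.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-944257",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}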

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 logs: (1.34216862s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 logs --file /tmp/TestFunctionalserialLogsFileCmd3081435828/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 logs --file /tmp/TestFunctionalserialLogsFileCmd3081435828/001/logs.txt: (1.365485087s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (4.14s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-944257 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-944257
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-944257: exit status 115 (296.688417ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.42:31319 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-944257 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.14s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 config get cpus: exit status 14 (74.375067ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 config get cpus: exit status 14 (61.219862ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
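
The ConfigCmd run shows the config round trip: "config get cpus" exits with status 14 while the key is unset, succeeds after "config set cpus 2", and fails again after "config unset cpus". Below is a small Go sketch of that exit-code check, reusing the binary and profile names from this log; it is illustrative only.

	// config_cmd_sketch.go: verify the exit-code behaviour of `config get`
	// for unset and set keys.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// exitCode extracts the process exit status from an exec error.
	func exitCode(err error) int {
		if err == nil {
			return 0
		}
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}

	func main() {
		mk := "out/minikube-linux-amd64"
		profile := "functional-944257"

		_ = exec.Command(mk, "-p", profile, "config", "unset", "cpus").Run()
		_, err := exec.Command(mk, "-p", profile, "config", "get", "cpus").Output()
		fmt.Println("exit code for unset key:", exitCode(err)) // observed above: 14

		if err := exec.Command(mk, "-p", profile, "config", "set", "cpus", "2").Run(); err != nil {
			log.Fatal(err)
		}
		out, err := exec.Command(mk, "-p", profile, "config", "get", "cpus").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("cpus = %s", out) // expected: 2
	}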

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-944257 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-944257 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 970156: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.51s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944257 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-944257 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.586701ms)

                                                
                                                
-- stdout --
	* [functional-944257] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 21:22:20.432997  969985 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:22:20.433145  969985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:22:20.433153  969985 out.go:309] Setting ErrFile to fd 2...
	I0830 21:22:20.433158  969985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:22:20.433347  969985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 21:22:20.433863  969985 out.go:303] Setting JSON to false
	I0830 21:22:20.434937  969985 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11088,"bootTime":1693419453,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:22:20.435000  969985 start.go:138] virtualization: kvm guest
	I0830 21:22:20.437130  969985 out.go:177] * [functional-944257] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 21:22:20.438963  969985 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 21:22:20.440310  969985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:22:20.439032  969985 notify.go:220] Checking for updates...
	I0830 21:22:20.441750  969985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:22:20.443203  969985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:22:20.444550  969985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 21:22:20.445923  969985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:22:20.447530  969985 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:22:20.447998  969985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:22:20.448053  969985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:22:20.462997  969985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0830 21:22:20.463411  969985 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:22:20.464002  969985 main.go:141] libmachine: Using API Version  1
	I0830 21:22:20.464025  969985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:22:20.464349  969985 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:22:20.464507  969985 main.go:141] libmachine: (functional-944257) Calling .DriverName
	I0830 21:22:20.464763  969985 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:22:20.465084  969985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:22:20.465123  969985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:22:20.479401  969985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45053
	I0830 21:22:20.479835  969985 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:22:20.480382  969985 main.go:141] libmachine: Using API Version  1
	I0830 21:22:20.480414  969985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:22:20.480787  969985 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:22:20.480978  969985 main.go:141] libmachine: (functional-944257) Calling .DriverName
	I0830 21:22:20.515394  969985 out.go:177] * Using the kvm2 driver based on existing profile
	I0830 21:22:20.516850  969985 start.go:298] selected driver: kvm2
	I0830 21:22:20.516870  969985 start.go:902] validating driver "kvm2" against &{Name:functional-944257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-944257 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.42 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:22:20.517006  969985 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:22:20.519603  969985 out.go:177] 
	W0830 21:22:20.521087  969985 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0830 21:22:20.522586  969985 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944257 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
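
The DryRun step confirms that an undersized memory request is rejected before any work happens: "--memory 250MB" makes start exit with status 23 and RSRC_INSUFFICIENT_REQ_MEMORY even under --dry-run. A minimal Go check of that behaviour, using the same flags and profile names as the log (illustrative only):

	// dryrun_memory_sketch.go: expect exit status 23 when requesting 250MB.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-944257",
			"--dry-run", "--memory", "250MB", "--alsologtostderr",
			"--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		}
		fmt.Printf("exit code: %d (expected 23)\n%s", code, out)
	}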

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-944257 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-944257 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (187.530489ms)

                                                
                                                
-- stdout --
	* [functional-944257] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 21:22:19.394808  969721 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:22:19.394928  969721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:22:19.394936  969721 out.go:309] Setting ErrFile to fd 2...
	I0830 21:22:19.394941  969721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:22:19.395218  969721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 21:22:19.395818  969721 out.go:303] Setting JSON to false
	I0830 21:22:19.396832  969721 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11087,"bootTime":1693419453,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 21:22:19.396889  969721 start.go:138] virtualization: kvm guest
	I0830 21:22:19.399456  969721 out.go:177] * [functional-944257] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0830 21:22:19.401961  969721 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 21:22:19.401905  969721 notify.go:220] Checking for updates...
	I0830 21:22:19.403470  969721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:22:19.405290  969721 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 21:22:19.407834  969721 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 21:22:19.409485  969721 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 21:22:19.411380  969721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:22:19.413229  969721 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:22:19.413576  969721 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:22:19.413642  969721 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:22:19.435127  969721 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0830 21:22:19.435720  969721 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:22:19.436642  969721 main.go:141] libmachine: Using API Version  1
	I0830 21:22:19.436663  969721 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:22:19.436992  969721 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:22:19.437137  969721 main.go:141] libmachine: (functional-944257) Calling .DriverName
	I0830 21:22:19.437355  969721 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:22:19.437798  969721 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:22:19.437839  969721 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:22:19.457286  969721 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
	I0830 21:22:19.458122  969721 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:22:19.458745  969721 main.go:141] libmachine: Using API Version  1
	I0830 21:22:19.458769  969721 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:22:19.459357  969721 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:22:19.459576  969721 main.go:141] libmachine: (functional-944257) Calling .DriverName
	I0830 21:22:19.509876  969721 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0830 21:22:19.511744  969721 start.go:298] selected driver: kvm2
	I0830 21:22:19.511764  969721 start.go:902] validating driver "kvm2" against &{Name:functional-944257 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-944257 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.42 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:22:19.511946  969721 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:22:19.515191  969721 out.go:177] 
	W0830 21:22:19.516985  969721 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0830 21:22:19.518607  969721 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (26.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-944257 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-944257 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-rhfnl" [55be25f1-9723-4adc-8cee-a71068fb8ca5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-rhfnl" [55be25f1-9723-4adc-8cee-a71068fb8ca5] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.012482285s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.42:32677
functional_test.go:1674: http://192.168.50.42:32677: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-rhfnl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.42:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.42:32677
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.51s)
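
ServiceCmdConnect creates a deployment from registry.k8s.io/echoserver:1.8, exposes it as a NodePort service, asks minikube for the service URL, and fetches it. The short Go sketch below covers the last two steps, assuming the hello-node-connect service already exists and using the binary and profile names from this log.

	// service_url_sketch.go: resolve a NodePort URL via minikube and GET it.
	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		mk := "out/minikube-linux-amd64"
		profile := "functional-944257"

		// Ask minikube for the externally reachable URL of the service.
		out, err := exec.Command(mk, "-p", profile, "service", "hello-node-connect", "--url").Output()
		if err != nil {
			log.Fatal(err)
		}
		url := strings.TrimSpace(string(out))

		// Hit the endpoint; echoserver replies with the request details.
		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
	}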

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [01cb94a6-5393-4739-b279-35fe20fa4fed] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.023984081s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-944257 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-944257 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-944257 get pvc myclaim -o=json
E0830 21:21:57.076182  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:21:57.082225  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:21:57.092575  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:21:57.112933  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:21:57.153316  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:21:57.233748  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:21:57.394859  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-944257 get pvc myclaim -o=json
E0830 21:21:57.715516  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-944257 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5f031c34-b17b-460b-838e-67d979a80c03] Pending
E0830 21:21:58.356411  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [5f031c34-b17b-460b-838e-67d979a80c03] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0830 21:21:59.636929  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:22:02.197382  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [5f031c34-b17b-460b-838e-67d979a80c03] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.025501848s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-944257 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-944257 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-944257 delete -f testdata/storage-provisioner/pod.yaml: (2.18196662s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-944257 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8ab47d18-bf4a-4f49-b343-693b36775eef] Pending
helpers_test.go:344: "sp-pod" [8ab47d18-bf4a-4f49-b343-693b36775eef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8ab47d18-bf4a-4f49-b343-693b36775eef] Running
2023/08/30 21:22:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.050808243s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-944257 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.49s)
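
The PersistentVolumeClaim flow writes a file into the PVC-backed mount, deletes and recreates the pod, and then checks that the file is still there. Below is a rough Go outline of that persistence check, using the kubectl context, pod name, and manifest path from this log; a real check would also wait for the recreated pod to become Ready before the final exec.

	// pvc_persistence_sketch.go: data written to /tmp/mount should survive a
	// pod recreation because it lives on the claim, not in the pod.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// kubectl runs a kubectl command against the functional-944257 context.
	func kubectl(args ...string) string {
		full := append([]string{"--context", "functional-944257"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		// Write a marker file onto the mounted claim.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

		// Recreate the pod; the same claim is reattached to the new pod.
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (Wait for the new sp-pod to be Running/Ready before the final check.)

		// The marker file should still be visible in the fresh pod.
		fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	}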

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh -n functional-944257 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 cp functional-944257:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2563054400/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh -n functional-944257 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.93s)

                                                
                                    
TestFunctional/parallel/MySQL (25.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-944257 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-b6jl7" [6f1b24d8-7484-491c-aea2-adbcecd30730] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-b6jl7" [6f1b24d8-7484-491c-aea2-adbcecd30730] Running
E0830 21:22:07.317783  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.02193335s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-944257 exec mysql-859648c796-b6jl7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-944257 exec mysql-859648c796-b6jl7 -- mysql -ppassword -e "show databases;": exit status 1 (480.542599ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-944257 exec mysql-859648c796-b6jl7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-944257 exec mysql-859648c796-b6jl7 -- mysql -ppassword -e "show databases;": exit status 1 (305.957443ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-944257 exec mysql-859648c796-b6jl7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.88s)
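
The MySQL log shows why the check is retried: the first attempts fail with ERROR 1045 and ERROR 2002 while mysqld is still initializing inside the pod, and a later attempt succeeds. A small Go retry loop around the same kubectl exec is sketched below, with the pod name copied from this run (it will differ on yours):

	// mysql_retry_sketch.go: retry `show databases;` until mysqld is ready.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		pod := "mysql-859648c796-b6jl7" // from the log above; look yours up with kubectl get pods
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-944257",
				"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			log.Printf("attempt %d failed (mysqld may still be starting): %v", attempt, err)
			time.Sleep(5 * time.Second)
		}
		log.Fatal("mysql never became reachable")
	}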

                                                
                                    
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/962621/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo cat /etc/test/nested/copy/962621/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
TestFunctional/parallel/CertSync (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/962621.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo cat /etc/ssl/certs/962621.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/962621.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo cat /usr/share/ca-certificates/962621.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/9626212.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo cat /etc/ssl/certs/9626212.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/9626212.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo cat /usr/share/ca-certificates/9626212.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.43s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-944257 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 ssh "sudo systemctl is-active docker": exit status 1 (228.356815ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 ssh "sudo systemctl is-active containerd": exit status 1 (236.730981ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/Version/short (0.28s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

TestFunctional/parallel/Version/components (1.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 version -o=json --components: (1.048920862s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/MountCmd/any-port (26.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdany-port1645504833/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1693430511482167666" to /tmp/TestFunctionalparallelMountCmdany-port1645504833/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1693430511482167666" to /tmp/TestFunctionalparallelMountCmdany-port1645504833/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1693430511482167666" to /tmp/TestFunctionalparallelMountCmdany-port1645504833/001/test-1693430511482167666
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.800636ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 30 21:21 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 30 21:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 30 21:21 test-1693430511482167666
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh cat /mount-9p/test-1693430511482167666
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-944257 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [261c0765-1f01-4a4c-a8db-2032a8203ca4] Pending
helpers_test.go:344: "busybox-mount" [261c0765-1f01-4a4c-a8db-2032a8203ca4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [261c0765-1f01-4a4c-a8db-2032a8203ca4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [261c0765-1f01-4a4c-a8db-2032a8203ca4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 23.111570831s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-944257 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdany-port1645504833/001:/mount-9p --alsologtostderr -v=1] ...
E0830 21:22:17.558355  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/any-port (26.09s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-944257 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-944257 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-4x8k9" [5c2fb72c-3c11-4721-8d26-11b33f785ea3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-4x8k9" [5c2fb72c-3c11-4721-8d26-11b33f785ea3] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.027566058s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.53s)

TestFunctional/parallel/MountCmd/specific-port (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdspecific-port2902536984/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (388.258387ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdspecific-port2902536984/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 ssh "sudo umount -f /mount-9p": exit status 1 (201.052922ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-944257 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdspecific-port2902536984/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "216.233356ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "65.443488ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "210.502415ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "59.589337ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230606945/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230606945/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230606945/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T" /mount1: exit status 1 (271.327212ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-944257 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230606945/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230606945/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-944257 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230606945/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944257 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-944257
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-944257
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944257 image ls --format short --alsologtostderr:
I0830 21:22:41.054961  970832 out.go:296] Setting OutFile to fd 1 ...
I0830 21:22:41.055110  970832 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.055119  970832 out.go:309] Setting ErrFile to fd 2...
I0830 21:22:41.055124  970832 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.055333  970832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
I0830 21:22:41.055945  970832 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.056049  970832 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.056455  970832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.056516  970832 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.073533  970832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
I0830 21:22:41.074223  970832 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.074960  970832 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.074986  970832 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.075379  970832 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.075591  970832 main.go:141] libmachine: (functional-944257) Calling .GetState
I0830 21:22:41.077724  970832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.077759  970832 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.095213  970832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
I0830 21:22:41.095670  970832 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.096436  970832 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.096463  970832 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.096821  970832 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.097083  970832 main.go:141] libmachine: (functional-944257) Calling .DriverName
I0830 21:22:41.097287  970832 ssh_runner.go:195] Run: systemctl --version
I0830 21:22:41.097317  970832 main.go:141] libmachine: (functional-944257) Calling .GetSSHHostname
I0830 21:22:41.100509  970832 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.100999  970832 main.go:141] libmachine: (functional-944257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:8a:26", ip: ""} in network mk-functional-944257: {Iface:virbr1 ExpiryTime:2023-08-30 22:18:43 +0000 UTC Type:0 Mac:52:54:00:f5:8a:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-944257 Clientid:01:52:54:00:f5:8a:26}
I0830 21:22:41.101034  970832 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined IP address 192.168.50.42 and MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.101210  970832 main.go:141] libmachine: (functional-944257) Calling .GetSSHPort
I0830 21:22:41.101356  970832 main.go:141] libmachine: (functional-944257) Calling .GetSSHKeyPath
I0830 21:22:41.101482  970832 main.go:141] libmachine: (functional-944257) Calling .GetSSHUsername
I0830 21:22:41.101584  970832 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/functional-944257/id_rsa Username:docker}
I0830 21:22:41.197781  970832 ssh_runner.go:195] Run: sudo crictl images --output json
I0830 21:22:41.261346  970832 main.go:141] libmachine: Making call to close driver server
I0830 21:22:41.261364  970832 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:41.261676  970832 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:41.261702  970832 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 21:22:41.261719  970832 main.go:141] libmachine: Making call to close driver server
I0830 21:22:41.261736  970832 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:41.261955  970832 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:41.261972  970832 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944257 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b462ce0c8b1ff | 61.5MB |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 821b3dfea27be | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.1            | 6cdbabde3874e | 74.7MB |
| localhost/minikube-local-cache-test     | functional-944257  | fb13b4b6ee89d | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.1            | 5c801295c21d0 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | eea7b3dcba7ee | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-944257  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944257 image ls --format table --alsologtostderr:
I0830 21:22:41.318338  970929 out.go:296] Setting OutFile to fd 1 ...
I0830 21:22:41.318475  970929 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.318486  970929 out.go:309] Setting ErrFile to fd 2...
I0830 21:22:41.318494  970929 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.318700  970929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
I0830 21:22:41.319302  970929 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.319445  970929 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.320035  970929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.320114  970929 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.334770  970929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
I0830 21:22:41.335311  970929 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.335994  970929 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.336021  970929 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.336368  970929 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.336558  970929 main.go:141] libmachine: (functional-944257) Calling .GetState
I0830 21:22:41.338750  970929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.338802  970929 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.352448  970929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
I0830 21:22:41.352861  970929 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.353346  970929 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.353372  970929 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.353720  970929 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.353910  970929 main.go:141] libmachine: (functional-944257) Calling .DriverName
I0830 21:22:41.354101  970929 ssh_runner.go:195] Run: systemctl --version
I0830 21:22:41.354136  970929 main.go:141] libmachine: (functional-944257) Calling .GetSSHHostname
I0830 21:22:41.357029  970929 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.357405  970929 main.go:141] libmachine: (functional-944257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:8a:26", ip: ""} in network mk-functional-944257: {Iface:virbr1 ExpiryTime:2023-08-30 22:18:43 +0000 UTC Type:0 Mac:52:54:00:f5:8a:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-944257 Clientid:01:52:54:00:f5:8a:26}
I0830 21:22:41.357442  970929 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined IP address 192.168.50.42 and MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.357651  970929 main.go:141] libmachine: (functional-944257) Calling .GetSSHPort
I0830 21:22:41.357824  970929 main.go:141] libmachine: (functional-944257) Calling .GetSSHKeyPath
I0830 21:22:41.357948  970929 main.go:141] libmachine: (functional-944257) Calling .GetSSHUsername
I0830 21:22:41.358079  970929 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/functional-944257/id_rsa Username:docker}
I0830 21:22:41.451021  970929 ssh_runner.go:195] Run: sudo crictl images --output json
I0830 21:22:41.487382  970929 main.go:141] libmachine: Making call to close driver server
I0830 21:22:41.487405  970929 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:41.487683  970929 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:41.487704  970929 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 21:22:41.487720  970929 main.go:141] libmachine: Making call to close driver server
I0830 21:22:41.487728  970929 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:41.488026  970929 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:41.488046  970929 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 21:22:41.488068  970929 main.go:141] libmachine: (functional-944257) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944257 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"123163446"},{"id":"6cdbabde38
74e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":["registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3","registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"74680215"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba08
0558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"fb13b4b6ee89d822698bb2a459231dcf2ce4fc5a1541bc6160fb2a5d428a0da2","repoDigests":["localhost/minikube-local-cache-test@sha256:d0d8ed9d50906d557a2794d0b065fea5c0d4925c90d655a5d4c53c0f7541f9fb"],"repoTags":["localhost/minikube-local-cache-test:functional-944257"],"size":"3345"},{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":["registr
y.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"126972880"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069a
df654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-944257"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d
994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24","repoDigests":["docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c","docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820092"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4","registry.k8s.io/kube-scheduler@sha256:7
e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"61477686"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944257 image ls --format json --alsologtostderr:
I0830 21:22:41.062446  970834 out.go:296] Setting OutFile to fd 1 ...
I0830 21:22:41.062588  970834 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.062598  970834 out.go:309] Setting ErrFile to fd 2...
I0830 21:22:41.062605  970834 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.062813  970834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
I0830 21:22:41.063486  970834 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.063638  970834 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.064180  970834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.064256  970834 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.081200  970834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46677
I0830 21:22:41.081691  970834 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.082342  970834 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.082368  970834 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.083016  970834 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.083244  970834 main.go:141] libmachine: (functional-944257) Calling .GetState
I0830 21:22:41.085842  970834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.085884  970834 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.102938  970834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33405
I0830 21:22:41.103326  970834 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.103818  970834 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.103835  970834 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.104376  970834 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.104558  970834 main.go:141] libmachine: (functional-944257) Calling .DriverName
I0830 21:22:41.104763  970834 ssh_runner.go:195] Run: systemctl --version
I0830 21:22:41.104788  970834 main.go:141] libmachine: (functional-944257) Calling .GetSSHHostname
I0830 21:22:41.107544  970834 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.107820  970834 main.go:141] libmachine: (functional-944257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:8a:26", ip: ""} in network mk-functional-944257: {Iface:virbr1 ExpiryTime:2023-08-30 22:18:43 +0000 UTC Type:0 Mac:52:54:00:f5:8a:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-944257 Clientid:01:52:54:00:f5:8a:26}
I0830 21:22:41.107855  970834 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined IP address 192.168.50.42 and MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.107975  970834 main.go:141] libmachine: (functional-944257) Calling .GetSSHPort
I0830 21:22:41.108137  970834 main.go:141] libmachine: (functional-944257) Calling .GetSSHKeyPath
I0830 21:22:41.108296  970834 main.go:141] libmachine: (functional-944257) Calling .GetSSHUsername
I0830 21:22:41.108408  970834 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/functional-944257/id_rsa Username:docker}
I0830 21:22:41.215173  970834 ssh_runner.go:195] Run: sudo crictl images --output json
I0830 21:22:41.296395  970834 main.go:141] libmachine: Making call to close driver server
I0830 21:22:41.296417  970834 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:41.296690  970834 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:41.296704  970834 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 21:22:41.296718  970834 main.go:141] libmachine: Making call to close driver server
I0830 21:22:41.296727  970834 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:41.298262  970834 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:41.298277  970834 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944257 image ls --format yaml --alsologtostderr:
- id: fb13b4b6ee89d822698bb2a459231dcf2ce4fc5a1541bc6160fb2a5d428a0da2
repoDigests:
- localhost/minikube-local-cache-test@sha256:d0d8ed9d50906d557a2794d0b065fea5c0d4925c90d655a5d4c53c0f7541f9fb
repoTags:
- localhost/minikube-local-cache-test:functional-944257
size: "3345"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "74680215"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "123163446"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24
repoDigests:
- docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c
- docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35
repoTags:
- docker.io/library/nginx:latest
size: "190820092"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126972880"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-944257
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
- registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "61477686"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944257 image ls --format yaml --alsologtostderr:
I0830 21:22:41.060931  970833 out.go:296] Setting OutFile to fd 1 ...
I0830 21:22:41.061067  970833 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.061079  970833 out.go:309] Setting ErrFile to fd 2...
I0830 21:22:41.061086  970833 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.061364  970833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
I0830 21:22:41.061961  970833 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.062061  970833 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.062398  970833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.062467  970833 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.077395  970833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
I0830 21:22:41.077941  970833 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.078912  970833 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.078945  970833 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.079395  970833 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.079654  970833 main.go:141] libmachine: (functional-944257) Calling .GetState
I0830 21:22:41.081907  970833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.081964  970833 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.098219  970833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
I0830 21:22:41.098664  970833 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.099249  970833 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.099268  970833 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.099698  970833 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.099949  970833 main.go:141] libmachine: (functional-944257) Calling .DriverName
I0830 21:22:41.100153  970833 ssh_runner.go:195] Run: systemctl --version
I0830 21:22:41.100180  970833 main.go:141] libmachine: (functional-944257) Calling .GetSSHHostname
I0830 21:22:41.104081  970833 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.104685  970833 main.go:141] libmachine: (functional-944257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:8a:26", ip: ""} in network mk-functional-944257: {Iface:virbr1 ExpiryTime:2023-08-30 22:18:43 +0000 UTC Type:0 Mac:52:54:00:f5:8a:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-944257 Clientid:01:52:54:00:f5:8a:26}
I0830 21:22:41.104726  970833 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined IP address 192.168.50.42 and MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.104889  970833 main.go:141] libmachine: (functional-944257) Calling .GetSSHPort
I0830 21:22:41.105043  970833 main.go:141] libmachine: (functional-944257) Calling .GetSSHKeyPath
I0830 21:22:41.105276  970833 main.go:141] libmachine: (functional-944257) Calling .GetSSHUsername
I0830 21:22:41.105443  970833 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/functional-944257/id_rsa Username:docker}
I0830 21:22:41.191017  970833 ssh_runner.go:195] Run: sudo crictl images --output json
I0830 21:22:41.240062  970833 main.go:141] libmachine: Making call to close driver server
I0830 21:22:41.240076  970833 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:41.240496  970833 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:41.240533  970833 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 21:22:41.240548  970833 main.go:141] libmachine: Making call to close driver server
I0830 21:22:41.240559  970833 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:41.240496  970833 main.go:141] libmachine: (functional-944257) DBG | Closing plugin on server side
I0830 21:22:41.240845  970833 main.go:141] libmachine: (functional-944257) DBG | Closing plugin on server side
I0830 21:22:41.240953  970833 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:41.240995  970833 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-944257 ssh pgrep buildkitd: exit status 1 (263.0375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image build -t localhost/my-image:functional-944257 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 image build -t localhost/my-image:functional-944257 testdata/build --alsologtostderr: (2.177774148s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-944257 image build -t localhost/my-image:functional-944257 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4f26854d5f9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-944257
--> 6be6ca1ff26
Successfully tagged localhost/my-image:functional-944257
6be6ca1ff2637b9cf9a698b257c882cab8ea13f5afff139f940578ef7732d11d
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-944257 image build -t localhost/my-image:functional-944257 testdata/build --alsologtostderr:
I0830 21:22:41.317701  970930 out.go:296] Setting OutFile to fd 1 ...
I0830 21:22:41.317849  970930 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.317859  970930 out.go:309] Setting ErrFile to fd 2...
I0830 21:22:41.317864  970930 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:22:41.318089  970930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
I0830 21:22:41.318862  970930 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.319593  970930 config.go:182] Loaded profile config "functional-944257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:22:41.320175  970930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.320227  970930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.334701  970930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
I0830 21:22:41.335197  970930 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.335914  970930 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.335940  970930 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.336301  970930 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.336503  970930 main.go:141] libmachine: (functional-944257) Calling .GetState
I0830 21:22:41.338606  970930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0830 21:22:41.338652  970930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 21:22:41.352658  970930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
I0830 21:22:41.353077  970930 main.go:141] libmachine: () Calling .GetVersion
I0830 21:22:41.353551  970930 main.go:141] libmachine: Using API Version  1
I0830 21:22:41.353570  970930 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 21:22:41.353952  970930 main.go:141] libmachine: () Calling .GetMachineName
I0830 21:22:41.354168  970930 main.go:141] libmachine: (functional-944257) Calling .DriverName
I0830 21:22:41.354372  970930 ssh_runner.go:195] Run: systemctl --version
I0830 21:22:41.354406  970930 main.go:141] libmachine: (functional-944257) Calling .GetSSHHostname
I0830 21:22:41.357281  970930 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.357725  970930 main.go:141] libmachine: (functional-944257) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:8a:26", ip: ""} in network mk-functional-944257: {Iface:virbr1 ExpiryTime:2023-08-30 22:18:43 +0000 UTC Type:0 Mac:52:54:00:f5:8a:26 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:functional-944257 Clientid:01:52:54:00:f5:8a:26}
I0830 21:22:41.357747  970930 main.go:141] libmachine: (functional-944257) DBG | domain functional-944257 has defined IP address 192.168.50.42 and MAC address 52:54:00:f5:8a:26 in network mk-functional-944257
I0830 21:22:41.357825  970930 main.go:141] libmachine: (functional-944257) Calling .GetSSHPort
I0830 21:22:41.358010  970930 main.go:141] libmachine: (functional-944257) Calling .GetSSHKeyPath
I0830 21:22:41.358155  970930 main.go:141] libmachine: (functional-944257) Calling .GetSSHUsername
I0830 21:22:41.358300  970930 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/functional-944257/id_rsa Username:docker}
I0830 21:22:41.450197  970930 build_images.go:151] Building image from path: /tmp/build.191911705.tar
I0830 21:22:41.450271  970930 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0830 21:22:41.461878  970930 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.191911705.tar
I0830 21:22:41.472322  970930 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.191911705.tar: stat -c "%s %y" /var/lib/minikube/build/build.191911705.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.191911705.tar': No such file or directory
I0830 21:22:41.472355  970930 ssh_runner.go:362] scp /tmp/build.191911705.tar --> /var/lib/minikube/build/build.191911705.tar (3072 bytes)
I0830 21:22:41.510222  970930 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.191911705
I0830 21:22:41.520263  970930 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.191911705 -xf /var/lib/minikube/build/build.191911705.tar
I0830 21:22:41.531209  970930 crio.go:297] Building image: /var/lib/minikube/build/build.191911705
I0830 21:22:41.531320  970930 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-944257 /var/lib/minikube/build/build.191911705 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0830 21:22:43.394178  970930 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-944257 /var/lib/minikube/build/build.191911705 --cgroup-manager=cgroupfs: (1.862814331s)
I0830 21:22:43.394278  970930 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.191911705
I0830 21:22:43.406962  970930 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.191911705.tar
I0830 21:22:43.416796  970930 build_images.go:207] Built localhost/my-image:functional-944257 from /tmp/build.191911705.tar
I0830 21:22:43.416828  970930 build_images.go:123] succeeded building to: functional-944257
I0830 21:22:43.416832  970930 build_images.go:124] failed building to: 
I0830 21:22:43.416901  970930 main.go:141] libmachine: Making call to close driver server
I0830 21:22:43.416918  970930 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:43.417183  970930 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:43.417205  970930 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 21:22:43.417216  970930 main.go:141] libmachine: Making call to close driver server
I0830 21:22:43.417241  970930 main.go:141] libmachine: (functional-944257) DBG | Closing plugin on server side
I0830 21:22:43.417276  970930 main.go:141] libmachine: (functional-944257) Calling .Close
I0830 21:22:43.417585  970930 main.go:141] libmachine: (functional-944257) DBG | Closing plugin on server side
I0830 21:22:43.417586  970930 main.go:141] libmachine: Successfully made call to close driver server
I0830 21:22:43.417610  970930 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)
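The three STEP lines in the Stdout above imply that the build context in testdata/build carries a Containerfile of roughly this shape (a reconstruction from the log output, not the actual file, which is not shown here):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

As the Stderr shows, with the CRI-O runtime minikube packs the context into a tar (/tmp/build.191911705.tar), copies it to /var/lib/minikube/build on the node, unpacks it, and builds it with sudo podman build --cgroup-manager=cgroupfs.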

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-944257
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image load --daemon gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 image load --daemon gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr: (4.938892307s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 service list: (1.403187804s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 service list -o json: (1.327795612s)
functional_test.go:1493: Took "1.32790202s" to run "out/minikube-linux-amd64 -p functional-944257 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image load --daemon gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 image load --daemon gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr: (3.613990647s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.42:31085
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.42:31085
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-944257
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image load --daemon gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 image load --daemon gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr: (4.202608627s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image save gcr.io/google-containers/addon-resizer:functional-944257 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 image save gcr.io/google-containers/addon-resizer:functional-944257 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.183038004s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image rm gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E0830 21:22:38.039329  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.187585421s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-944257
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-944257 image save --daemon gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-944257 image save --daemon gcr.io/google-containers/addon-resizer:functional-944257 --alsologtostderr: (1.518115945s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-944257
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.55s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-944257
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-944257
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-944257
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (84.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-306023 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0830 21:23:19.000392  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-306023 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.639152723s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (84.64s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.47s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-306023 addons enable ingress --alsologtostderr -v=5: (12.469199534s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.47s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-306023 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)

                                                
                                    
TestJSONOutput/start/Command (90.9s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-491526 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0830 21:27:10.198017  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:27:24.762710  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:27:30.678248  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:28:11.640227  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-491526 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m30.901031028s)
--- PASS: TestJSONOutput/start/Command (90.90s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-491526 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-491526 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-491526 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-491526 --output=json --user=testUser: (7.103922779s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-540343 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-540343 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.269321ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"983b7250-8b06-4606-9ebf-34e7a21c3c3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-540343] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"452e288b-60f9-4a44-90fd-19dc4e4c9711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17114"}}
	{"specversion":"1.0","id":"b3db2e66-1b7b-417b-a41d-54a738185b3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"48fdf51d-b898-4283-8580-97bdf8801784","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig"}}
	{"specversion":"1.0","id":"df51697a-6618-4984-bdda-1fa028dbb31b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube"}}
	{"specversion":"1.0","id":"02d349b5-00f6-4f8a-848d-17cb65fe251a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"74494a04-7e61-4c5c-87e6-0bdfdbbefdcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9c6b698a-eaa3-4434-a744-deb139532e96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-540343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-540343
--- PASS: TestErrorJSONOutput (0.22s)
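Each line of the --output=json stdout above is a CloudEvents-style JSON object with specversion, id, source, type, datacontenttype and data fields. A minimal Go sketch for consuming such a stream is shown below; it is an illustration only (not part of the minikube test suite), and it assumes one event per line on stdin with string-valued data fields, as in the output above.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    // event mirrors the fields visible in the minikube --output=json lines above;
    // the data values (message, currentstep, exitcode, ...) are all strings there.
    type event struct {
        SpecVersion     string            `json:"specversion"`
        ID              string            `json:"id"`
        Source          string            `json:"source"`
        Type            string            `json:"type"`
        DataContentType string            `json:"datacontenttype"`
        Data            map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" {
                continue
            }
            var ev event
            if err := json.Unmarshal([]byte(line), &ev); err != nil {
                fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
                continue
            }
            // e.g. "io.k8s.sigs.minikube.error: The driver 'fail' is not supported on linux/amd64"
            fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
        }
    }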

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.16s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-422454 --driver=kvm2  --container-runtime=crio
E0830 21:29:22.736795  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:22.742120  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:22.752400  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:22.772689  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:22.812953  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:22.893292  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:23.053760  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:23.374081  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:24.015140  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:25.295692  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:27.855940  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:32.976707  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:29:33.560458  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-422454 --driver=kvm2  --container-runtime=crio: (46.761740474s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-424897 --driver=kvm2  --container-runtime=crio
E0830 21:29:43.217743  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:30:03.698282  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-424897 --driver=kvm2  --container-runtime=crio: (47.451981326s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-422454
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-424897
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-424897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-424897
helpers_test.go:175: Cleaning up "first-422454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-422454
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-422454: (1.033675513s)
--- PASS: TestMinikubeProfile (97.16s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-549945 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0830 21:30:44.659092  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-549945 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.480137187s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-549945 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-549945 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-579889 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-579889 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.416006031s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.42s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579889 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579889 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-549945 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579889 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579889 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-579889
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-579889: (1.197830421s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.67s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-579889
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-579889: (22.667614458s)
--- PASS: TestMountStart/serial/RestartStopped (23.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579889 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-579889 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752665 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0830 21:31:49.715546  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:31:57.076901  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:32:06.581522  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:32:17.400995  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-752665 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.369257074s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.81s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-752665 -- rollout status deployment/busybox: (3.043468229s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-j4rx4 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-mzmpx -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-j4rx4 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-mzmpx -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-j4rx4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-752665 -- exec busybox-5bc68d56bd-mzmpx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.90s)

                                                
                                    
TestMultiNode/serial/AddNode (41.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-752665 -v 3 --alsologtostderr
E0830 21:34:22.734517  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-752665 -v 3 --alsologtostderr: (40.8078552s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.41s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp testdata/cp-test.txt multinode-752665:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458377608/001/cp-test_multinode-752665.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665:/home/docker/cp-test.txt multinode-752665-m02:/home/docker/cp-test_multinode-752665_multinode-752665-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m02 "sudo cat /home/docker/cp-test_multinode-752665_multinode-752665-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665:/home/docker/cp-test.txt multinode-752665-m03:/home/docker/cp-test_multinode-752665_multinode-752665-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m03 "sudo cat /home/docker/cp-test_multinode-752665_multinode-752665-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp testdata/cp-test.txt multinode-752665-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458377608/001/cp-test_multinode-752665-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665-m02:/home/docker/cp-test.txt multinode-752665:/home/docker/cp-test_multinode-752665-m02_multinode-752665.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665 "sudo cat /home/docker/cp-test_multinode-752665-m02_multinode-752665.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665-m02:/home/docker/cp-test.txt multinode-752665-m03:/home/docker/cp-test_multinode-752665-m02_multinode-752665-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m03 "sudo cat /home/docker/cp-test_multinode-752665-m02_multinode-752665-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp testdata/cp-test.txt multinode-752665-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile458377608/001/cp-test_multinode-752665-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665-m03:/home/docker/cp-test.txt multinode-752665:/home/docker/cp-test_multinode-752665-m03_multinode-752665.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665 "sudo cat /home/docker/cp-test_multinode-752665-m03_multinode-752665.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 cp multinode-752665-m03:/home/docker/cp-test.txt multinode-752665-m02:/home/docker/cp-test_multinode-752665-m03_multinode-752665-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 ssh -n multinode-752665-m02 "sudo cat /home/docker/cp-test_multinode-752665-m03_multinode-752665-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.75s)
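For reference, a minimal sketch of the node-to-node copy pattern exercised above, using the same multinode-752665 profile and node names from this run (any local file and any other running multi-node profile would work the same way; the cp-test_copy.txt destination name is just an illustrative placeholder):

    # copy a local file onto the control-plane node, then verify it over ssh
    minikube -p multinode-752665 cp testdata/cp-test.txt multinode-752665:/home/docker/cp-test.txt
    minikube -p multinode-752665 ssh -n multinode-752665 "sudo cat /home/docker/cp-test.txt"

    # copy the file directly from one node to another and verify on the target node
    minikube -p multinode-752665 cp multinode-752665:/home/docker/cp-test.txt multinode-752665-m02:/home/docker/cp-test_copy.txt
    minikube -p multinode-752665 ssh -n multinode-752665-m02 "sudo cat /home/docker/cp-test_copy.txt"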

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-752665 node stop m03: (2.095916992s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-752665 status: exit status 7 (442.041075ms)

                                                
                                                
-- stdout --
	multinode-752665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-752665-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-752665-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-752665 status --alsologtostderr: exit status 7 (439.846746ms)

                                                
                                                
-- stdout --
	multinode-752665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-752665-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-752665-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 21:34:42.321804  977701 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:34:42.321920  977701 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:34:42.321929  977701 out.go:309] Setting ErrFile to fd 2...
	I0830 21:34:42.321933  977701 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:34:42.322156  977701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 21:34:42.322330  977701 out.go:303] Setting JSON to false
	I0830 21:34:42.322381  977701 mustload.go:65] Loading cluster: multinode-752665
	I0830 21:34:42.322492  977701 notify.go:220] Checking for updates...
	I0830 21:34:42.322938  977701 config.go:182] Loaded profile config "multinode-752665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:34:42.322958  977701 status.go:255] checking status of multinode-752665 ...
	I0830 21:34:42.323408  977701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:34:42.323479  977701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:34:42.343745  977701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
	I0830 21:34:42.344171  977701 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:34:42.344696  977701 main.go:141] libmachine: Using API Version  1
	I0830 21:34:42.344720  977701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:34:42.345033  977701 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:34:42.345218  977701 main.go:141] libmachine: (multinode-752665) Calling .GetState
	I0830 21:34:42.346563  977701 status.go:330] multinode-752665 host status = "Running" (err=<nil>)
	I0830 21:34:42.346580  977701 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:34:42.346863  977701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:34:42.346904  977701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:34:42.361905  977701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0830 21:34:42.362316  977701 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:34:42.362827  977701 main.go:141] libmachine: Using API Version  1
	I0830 21:34:42.362848  977701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:34:42.363168  977701 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:34:42.363368  977701 main.go:141] libmachine: (multinode-752665) Calling .GetIP
	I0830 21:34:42.365827  977701 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:34:42.366207  977701 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:34:42.366236  977701 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:34:42.366358  977701 host.go:66] Checking if "multinode-752665" exists ...
	I0830 21:34:42.366797  977701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:34:42.366848  977701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:34:42.381132  977701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I0830 21:34:42.381481  977701 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:34:42.381876  977701 main.go:141] libmachine: Using API Version  1
	I0830 21:34:42.381896  977701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:34:42.382262  977701 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:34:42.382432  977701 main.go:141] libmachine: (multinode-752665) Calling .DriverName
	I0830 21:34:42.382623  977701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 21:34:42.382665  977701 main.go:141] libmachine: (multinode-752665) Calling .GetSSHHostname
	I0830 21:34:42.385191  977701 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:34:42.385575  977701 main.go:141] libmachine: (multinode-752665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:23:77", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:32:03 +0000 UTC Type:0 Mac:52:54:00:73:23:77 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-752665 Clientid:01:52:54:00:73:23:77}
	I0830 21:34:42.385608  977701 main.go:141] libmachine: (multinode-752665) DBG | domain multinode-752665 has defined IP address 192.168.39.20 and MAC address 52:54:00:73:23:77 in network mk-multinode-752665
	I0830 21:34:42.385731  977701 main.go:141] libmachine: (multinode-752665) Calling .GetSSHPort
	I0830 21:34:42.385882  977701 main.go:141] libmachine: (multinode-752665) Calling .GetSSHKeyPath
	I0830 21:34:42.386010  977701 main.go:141] libmachine: (multinode-752665) Calling .GetSSHUsername
	I0830 21:34:42.386109  977701 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665/id_rsa Username:docker}
	I0830 21:34:42.471714  977701 ssh_runner.go:195] Run: systemctl --version
	I0830 21:34:42.477551  977701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:34:42.490281  977701 kubeconfig.go:92] found "multinode-752665" server: "https://192.168.39.20:8443"
	I0830 21:34:42.490306  977701 api_server.go:166] Checking apiserver status ...
	I0830 21:34:42.490333  977701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:34:42.501542  977701 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1107/cgroup
	I0830 21:34:42.509886  977701 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod063d73d4de1cf2feb4ba920354d72513/crio-a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6"
	I0830 21:34:42.509966  977701 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod063d73d4de1cf2feb4ba920354d72513/crio-a13f9a498d1111552cfc7e46a3d6df45cee3acb8398c2bef5fa5d20b7cd537f6/freezer.state
	I0830 21:34:42.518530  977701 api_server.go:204] freezer state: "THAWED"
	I0830 21:34:42.518560  977701 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I0830 21:34:42.523922  977701 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I0830 21:34:42.523949  977701 status.go:421] multinode-752665 apiserver status = Running (err=<nil>)
	I0830 21:34:42.523962  977701 status.go:257] multinode-752665 status: &{Name:multinode-752665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0830 21:34:42.523987  977701 status.go:255] checking status of multinode-752665-m02 ...
	I0830 21:34:42.524323  977701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:34:42.524366  977701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:34:42.539475  977701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0830 21:34:42.539940  977701 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:34:42.540464  977701 main.go:141] libmachine: Using API Version  1
	I0830 21:34:42.540491  977701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:34:42.540889  977701 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:34:42.541087  977701 main.go:141] libmachine: (multinode-752665-m02) Calling .GetState
	I0830 21:34:42.542619  977701 status.go:330] multinode-752665-m02 host status = "Running" (err=<nil>)
	I0830 21:34:42.542642  977701 host.go:66] Checking if "multinode-752665-m02" exists ...
	I0830 21:34:42.543014  977701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:34:42.543072  977701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:34:42.557815  977701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I0830 21:34:42.558250  977701 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:34:42.558749  977701 main.go:141] libmachine: Using API Version  1
	I0830 21:34:42.558772  977701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:34:42.559073  977701 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:34:42.559251  977701 main.go:141] libmachine: (multinode-752665-m02) Calling .GetIP
	I0830 21:34:42.562168  977701 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:34:42.562586  977701 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:34:42.562616  977701 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:34:42.562774  977701 host.go:66] Checking if "multinode-752665-m02" exists ...
	I0830 21:34:42.563108  977701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:34:42.563151  977701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:34:42.579518  977701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0830 21:34:42.579946  977701 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:34:42.580483  977701 main.go:141] libmachine: Using API Version  1
	I0830 21:34:42.580512  977701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:34:42.580816  977701 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:34:42.581027  977701 main.go:141] libmachine: (multinode-752665-m02) Calling .DriverName
	I0830 21:34:42.581202  977701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 21:34:42.581223  977701 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHHostname
	I0830 21:34:42.583944  977701 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:34:42.584375  977701 main.go:141] libmachine: (multinode-752665-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:5c:12", ip: ""} in network mk-multinode-752665: {Iface:virbr1 ExpiryTime:2023-08-30 22:33:13 +0000 UTC Type:0 Mac:52:54:00:63:5c:12 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-752665-m02 Clientid:01:52:54:00:63:5c:12}
	I0830 21:34:42.584424  977701 main.go:141] libmachine: (multinode-752665-m02) DBG | domain multinode-752665-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:63:5c:12 in network mk-multinode-752665
	I0830 21:34:42.584528  977701 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHPort
	I0830 21:34:42.584711  977701 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHKeyPath
	I0830 21:34:42.584848  977701 main.go:141] libmachine: (multinode-752665-m02) Calling .GetSSHUsername
	I0830 21:34:42.584967  977701 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17114-955377/.minikube/machines/multinode-752665-m02/id_rsa Username:docker}
	I0830 21:34:42.671670  977701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:34:42.684260  977701 status.go:257] multinode-752665-m02 status: &{Name:multinode-752665-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0830 21:34:42.684297  977701 status.go:255] checking status of multinode-752665-m03 ...
	I0830 21:34:42.684724  977701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0830 21:34:42.684783  977701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0830 21:34:42.700150  977701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0830 21:34:42.700587  977701 main.go:141] libmachine: () Calling .GetVersion
	I0830 21:34:42.701202  977701 main.go:141] libmachine: Using API Version  1
	I0830 21:34:42.701227  977701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0830 21:34:42.701552  977701 main.go:141] libmachine: () Calling .GetMachineName
	I0830 21:34:42.701712  977701 main.go:141] libmachine: (multinode-752665-m03) Calling .GetState
	I0830 21:34:42.703294  977701 status.go:330] multinode-752665-m03 host status = "Stopped" (err=<nil>)
	I0830 21:34:42.703308  977701 status.go:343] host is not running, skipping remaining checks
	I0830 21:34:42.703317  977701 status.go:257] multinode-752665-m03 status: &{Name:multinode-752665-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.98s)
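A hedged sketch of reproducing the degraded-status check above by hand: exit code 7 from `minikube status` matches the non-zero exits recorded in this test, and `kubectl get --raw` is a standard way to hit the apiserver healthz endpoint through the kubeconfig (this is not the in-process check minikube itself performs, which reads the freezer cgroup and probes https://<ip>:8443/healthz directly, as the stderr above shows):

    minikube -p multinode-752665 node stop m03
    minikube -p multinode-752665 status --alsologtostderr
    echo "status exit code: $?"    # 7 indicates at least one node is stopped

    # the control plane should still answer health checks
    kubectl --context multinode-752665 get --raw='/healthz'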

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 node start m03 --alsologtostderr
E0830 21:34:50.422549  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-752665 node start m03 --alsologtostderr: (28.495731049s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.18s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-752665 node delete m03: (1.212867088s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.78s)
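The node-readiness check used in the verification step above relies on a go-template; restated on its own below, with a jsonpath equivalent added as an illustrative alternative (the jsonpath form is an addition, not something this test runs):

    # print the Ready condition status for every node (template taken from the test above)
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

    # roughly equivalent jsonpath form
    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'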

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (447.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752665 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0830 21:49:22.736082  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:51:49.715987  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:51:57.076935  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 21:54:22.734719  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 21:55:00.124351  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-752665 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.980312378s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-752665 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.55s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (49.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-752665
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752665-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-752665-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.386552ms)

                                                
                                                
-- stdout --
	* [multinode-752665-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-752665-m02' is duplicated with machine name 'multinode-752665-m02' in profile 'multinode-752665'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-752665-m03 --driver=kvm2  --container-runtime=crio
E0830 21:56:49.715267  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 21:56:57.076278  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-752665-m03 --driver=kvm2  --container-runtime=crio: (48.000124179s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-752665
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-752665: exit status 80 (235.984305ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-752665
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-752665-m03 already exists in multinode-752665-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-752665-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-752665-m03: (1.016489358s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.39s)
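To avoid the MK_USAGE and GUEST_NODE_ADD failures shown above, existing profile and node names can be inspected before picking a new profile name; a minimal sketch (the jq path `.valid[].Name` is an assumption about the current JSON layout of `profile list`, not something confirmed by this report):

    # list existing profiles and the nodes of the profile in question
    minikube profile list --output=json | jq -r '.valid[].Name'
    minikube node list -p multinode-752665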

                                                
                                    
x
+
TestScheduledStopUnix (117.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-261533 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-261533 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.07838116s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-261533 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-261533 -n scheduled-stop-261533
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-261533 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-261533 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-261533 -n scheduled-stop-261533
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-261533
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-261533 --schedule 15s
E0830 22:01:49.715933  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0830 22:01:57.077579  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-261533
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-261533: exit status 7 (85.70186ms)

                                                
                                                
-- stdout --
	scheduled-stop-261533
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-261533 -n scheduled-stop-261533
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-261533 -n scheduled-stop-261533: exit status 7 (83.293874ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-261533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-261533
--- PASS: TestScheduledStopUnix (117.92s)
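The scheduled-stop workflow exercised above, collected into one place (all flags appear verbatim in the log; the profile name is the one created by this test):

    minikube stop -p scheduled-stop-261533 --schedule 5m            # arm a stop 5 minutes out
    minikube status --format='{{.TimeToStop}}' -p scheduled-stop-261533
    minikube stop -p scheduled-stop-261533 --cancel-scheduled        # cancel the pending stop
    minikube stop -p scheduled-stop-261533 --schedule 15s            # re-arm with a short delay
    minikube status --format='{{.Host}}' -p scheduled-stop-261533    # reports Stopped once it fires (exit 7)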

                                                
                                    
x
+
TestKubernetesUpgrade (199.58s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.700235759s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-412165
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-412165: (5.444386248s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-412165 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-412165 status --format={{.Host}}: exit status 7 (98.769654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.201685809s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-412165 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (119.357128ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-412165] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-412165
	    minikube start -p kubernetes-upgrade-412165 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4121652 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-412165 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.752799335s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-412165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-412165
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-412165: (1.183307245s)
--- PASS: TestKubernetesUpgrade (199.58s)
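The upgrade path this test walks through, as a standalone sketch (versions and profile name taken from the run above; the in-place downgrade attempt is expected to fail with exit status 106, and the recreate commands mirror the suggestion printed in the stderr):

    minikube start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-412165
    minikube start -p kubernetes-upgrade-412165 --memory=2200 --kubernetes-version=v1.28.1 --driver=kvm2 --container-runtime=crio
    kubectl --context kubernetes-upgrade-412165 version --output=json

    # an in-place downgrade is refused (K8S_DOWNGRADE_UNSUPPORTED); recreate the profile instead
    minikube delete -p kubernetes-upgrade-412165
    minikube start -p kubernetes-upgrade-412165 --kubernetes-version=v1.16.0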

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132469 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-132469 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (94.726871ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-132469] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (104.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132469 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-132469 --driver=kvm2  --container-runtime=crio: (1m43.788490816s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-132469 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (104.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132469 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-132469 --no-kubernetes --driver=kvm2  --container-runtime=crio: (5.552888399s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-132469 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-132469 status -o json: exit status 2 (273.609951ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-132469","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-132469
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-132469: (1.031429688s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.86s)
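The JSON status printed above can be inspected directly; a small sketch (field names match the output shown, and exit status 2 from `minikube status` is what this test tolerates for a running host with Kubernetes stopped):

    # expect Host=Running, Kubelet=Stopped, APIServer=Stopped for a --no-kubernetes profile
    minikube -p NoKubernetes-132469 status -o json | jq -r '.Host, .Kubelet, .APIServer'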

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (50.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132469 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0830 22:04:22.734358  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-132469 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.759063387s)
--- PASS: TestNoKubernetes/serial/Start (50.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-132469 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-132469 "sudo systemctl is-active --quiet service kubelet": exit status 1 (246.202962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-132469
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-132469: (1.40421481s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (26.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-132469 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-132469 --driver=kvm2  --container-runtime=crio: (26.258649673s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.26s)

                                                
                                    
x
+
TestPause/serial/Start (88.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-820510 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-820510 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m28.84786493s)
--- PASS: TestPause/serial/Start (88.85s)
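The profile started here is reused by the later pause tests; a hedged sketch of the basic pause/unpause round trip (the pause and unpause subcommands are standard minikube commands, not taken from this log):

    minikube start -p pause-820510 --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio
    minikube pause -p pause-820510      # freezes the Kubernetes control-plane containers
    minikube unpause -p pause-820510    # resumes them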

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-132469 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-132469 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.815861ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-051361 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-051361 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (143.963681ms)

                                                
                                                
-- stdout --
	* [false-051361] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:05:45.654051  988671 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:05:45.654250  988671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:05:45.654260  988671 out.go:309] Setting ErrFile to fd 2...
	I0830 22:05:45.654267  988671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:05:45.654567  988671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-955377/.minikube/bin
	I0830 22:05:45.655396  988671 out.go:303] Setting JSON to false
	I0830 22:05:45.656840  988671 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13693,"bootTime":1693419453,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0830 22:05:45.656934  988671 start.go:138] virtualization: kvm guest
	I0830 22:05:45.659729  988671 out.go:177] * [false-051361] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0830 22:05:45.661777  988671 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:05:45.661848  988671 notify.go:220] Checking for updates...
	I0830 22:05:45.663364  988671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:05:45.665069  988671 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-955377/kubeconfig
	I0830 22:05:45.666535  988671 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-955377/.minikube
	I0830 22:05:45.668148  988671 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0830 22:05:45.669885  988671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:05:45.672364  988671 config.go:182] Loaded profile config "force-systemd-env-134135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:05:45.672549  988671 config.go:182] Loaded profile config "pause-820510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:05:45.672632  988671 config.go:182] Loaded profile config "stopped-upgrade-184733": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0830 22:05:45.672772  988671 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:05:45.715792  988671 out.go:177] * Using the kvm2 driver based on user configuration
	I0830 22:05:45.717492  988671 start.go:298] selected driver: kvm2
	I0830 22:05:45.717513  988671 start.go:902] validating driver "kvm2" against <nil>
	I0830 22:05:45.717527  988671 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:05:45.719812  988671 out.go:177] 
	W0830 22:05:45.721159  988671 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0830 22:05:45.722573  988671 out.go:177] 

                                                
                                                
** /stderr **
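The MK_USAGE failure above is the expected guard: with the crio runtime a CNI is required, so --cni=false is rejected. A minimal sketch of a start line that satisfies the check (the bridge CNI choice is an illustrative assumption; any supported CNI, or simply omitting the flag, would do):

    minikube start -p false-051361 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio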
net_test.go:88: 
----------------------- debugLogs start: false-051361 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-051361" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-051361

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-051361"

                                                
                                                
----------------------- debugLogs end: false-051361 [took: 3.756358458s] --------------------------------
helpers_test.go:175: Cleaning up "false-051361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-051361
--- PASS: TestNetworkPlugins/group/false (4.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (393.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-250163 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-250163 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (6m33.748108642s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (393.75s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (163.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-698195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-698195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (2m43.87373332s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (163.87s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-184733
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.50s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (125.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-208903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0830 22:09:22.734794  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-208903 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (2m5.407038325s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (125.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-791007 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-791007 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m40.543996742s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-698195 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b6f48515-4a8e-4f84-8760-4f3b9b12b4d5] Pending
helpers_test.go:344: "busybox" [b6f48515-4a8e-4f84-8760-4f3b9b12b4d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b6f48515-4a8e-4f84-8760-4f3b9b12b4d5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.030023969s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-698195 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.55s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-208903 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aad5edde-92c9-49b4-8832-233ac1fce66b] Pending
helpers_test.go:344: "busybox" [aad5edde-92c9-49b4-8832-233ac1fce66b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aad5edde-92c9-49b4-8832-233ac1fce66b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.377890705s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-208903 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.88s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-698195 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-698195 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.435691274s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-698195 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-208903 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-208903 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.347838411s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-208903 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-791007 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eef8793a-8077-4a8c-b8c2-ec7c1fb625ec] Pending
helpers_test.go:344: "busybox" [eef8793a-8077-4a8c-b8c2-ec7c1fb625ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eef8793a-8077-4a8c-b8c2-ec7c1fb625ec] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.024433435s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-791007 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-791007 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-791007 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.089433903s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-791007 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (666.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-698195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-698195 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (11m6.490994345s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-698195 -n no-preload-698195
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (666.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-250163 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [768aa89b-334a-4646-91f8-0c6c2e62e1c9] Pending
helpers_test.go:344: "busybox" [768aa89b-334a-4646-91f8-0c6c2e62e1c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [768aa89b-334a-4646-91f8-0c6c2e62e1c9] Running
E0830 22:14:22.734891  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.03917803s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-250163 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-250163 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-250163 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (572.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-791007 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-791007 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (9m32.420843481s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-791007 -n default-k8s-diff-port-791007
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (572.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (576.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-250163 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0830 22:19:05.785666  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
E0830 22:19:22.734336  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/ingress-addon-legacy-306023/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-250163 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (9m35.901587565s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-250163 -n old-k8s-version-250163
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (576.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (62.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-618803 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-618803 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m2.987300887s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (103.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m43.455055363s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.46s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-618803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-618803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.489559344s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-618803 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-618803 --alsologtostderr -v=3: (12.132019905s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-618803 -n newest-cni-618803
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-618803 -n newest-cni-618803: exit status 7 (88.06045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-618803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (55.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-618803 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-618803 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (55.020057667s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-618803 -n newest-cni-618803
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (55.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-618803 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-618803 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-618803 -n newest-cni-618803
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-618803 -n newest-cni-618803: exit status 2 (257.702906ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-618803 -n newest-cni-618803
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-618803 -n newest-cni-618803: exit status 2 (263.476172ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-618803 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-618803 -n newest-cni-618803
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-618803 -n newest-cni-618803
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (69.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m9.377178076s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (119.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m59.980125004s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-051361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-051361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f8vwn" [158306f2-25b8-4ce0-a2f8-20a33ba3edca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f8vwn" [158306f2-25b8-4ce0-a2f8-20a33ba3edca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.010467387s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-051361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (99.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m39.382375885s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (99.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (136.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0830 22:41:06.873836  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:06.879141  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:06.889390  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:06.909659  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:06.949955  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:07.030523  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:07.190677  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:07.511328  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:08.152031  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:09.432988  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
E0830 22:41:11.993530  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m16.397804777s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (136.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-27vkf" [41344912-bd9c-45bd-8adf-07224b7bf7b7] Running
E0830 22:41:17.114365  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.024987509s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-051361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-051361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rbxjs" [18fa36b5-ac45-45e7-b671-22aea9656871] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rbxjs" [18fa36b5-ac45-45e7-b671-22aea9656871] Running
E0830 22:41:27.354996  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/no-preload-698195/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.013585747s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-051361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (86.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0830 22:41:49.715202  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/functional-944257/client.crt: no such file or directory
E0830 22:41:57.076155  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/addons-585092/client.crt: no such file or directory
E0830 22:42:02.638107  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/old-k8s-version-250163/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.290668609s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5xw8d" [64bb7293-210b-46c4-94db-2a65a10b0bf0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.039974973s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-051361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-051361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pctnh" [307845e0-ac8a-4b91-aa24-18f8f2fce053] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pctnh" [307845e0-ac8a-4b91-aa24-18f8f2fce053] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.026248891s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-051361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-051361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6skx8" [cc3c9fa7-1505-4aee-83d5-3a629f703c65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6skx8" [cc3c9fa7-1505-4aee-83d5-3a629f703c65] Running
E0830 22:42:29.675809  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:29.681305  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:29.691611  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:29.711948  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:29.752411  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:29.833355  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:29.994454  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:30.315394  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:30.956388  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:32.236656  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
E0830 22:42:34.796837  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.013866933s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-051361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-051361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0830 22:42:50.157668  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-051361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m3.580465097s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-051361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-051361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v7nwk" [006844a6-3232-429f-889e-9b40a78993dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0830 22:43:10.638937  962621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-955377/.minikube/profiles/default-k8s-diff-port-791007/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-v7nwk" [006844a6-3232-429f-889e-9b40a78993dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.020945735s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qb6bc" [b9f0cce5-17cf-4866-9171-7a6a75cdd20f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.399926488s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-051361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-051361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8dzgg" [74741e72-8990-4491-a068-0059e95593ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8dzgg" [74741e72-8990-4491-a068-0059e95593ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.012385782s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-051361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-051361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-051361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-051361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h6hzm" [b1ea38aa-b523-4f9b-9eee-cf7836d74090] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h6hzm" [b1ea38aa-b523-4f9b-9eee-cf7836d74090] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.010648934s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-051361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-051361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (36/288)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.1/cached-images 0
13 TestDownloadOnly/v1.28.1/binaries 0
14 TestDownloadOnly/v1.28.1/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
111 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
112 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
113 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
114 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
115 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
116 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
232 TestStartStop/group/disable-driver-mounts 0.15
248 TestNetworkPlugins/group/kubenet 4.1
256 TestNetworkPlugins/group/cilium 3.88
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-883991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-883991
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-051361 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-051361" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-051361

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-051361"

                                                
                                                
----------------------- debugLogs end: kubenet-051361 [took: 3.926760146s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-051361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-051361
--- SKIP: TestNetworkPlugins/group/kubenet (4.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-051361 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-051361

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: crictl containers:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> k8s: describe netcat deployment:
error: context "cilium-051361" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-051361" does not exist

>>> k8s: netcat logs:
error: context "cilium-051361" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-051361" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-051361" does not exist

>>> k8s: coredns logs:
error: context "cilium-051361" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-051361" does not exist

>>> k8s: api server logs:
error: context "cilium-051361" does not exist

>>> host: /etc/cni:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: ip a s:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: ip r s:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: iptables-save:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: iptables table nat:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-051361

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-051361

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-051361" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-051361" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-051361

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-051361

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-051361" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-051361" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-051361" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-051361" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-051361" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: kubelet daemon config:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> k8s: kubelet logs:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-051361

>>> host: docker daemon status:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: docker daemon config:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: docker system info:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: cri-docker daemon status:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: cri-docker daemon config:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: cri-dockerd version:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: containerd daemon status:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: containerd daemon config:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: containerd config dump:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: crio daemon status:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: crio daemon config:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: /etc/crio:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

>>> host: crio config:
* Profile "cilium-051361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-051361"

----------------------- debugLogs end: cilium-051361 [took: 3.724476273s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-051361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-051361
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)
